Mobile Technology

Mobile technology is a collective term for the various types of cellular communication technology. Mobile CDMA technology in particular has evolved rapidly over the past few years. Since the beginning of this millennium, a standard mobile device has gone from being a simple two-way pager to being a cellular phone, GPS navigation system, embedded web browser, instant messaging client, and hand-held video gaming system all in one. Many experts argue that the future of computer technology rests in mobile/wireless computing.

The United States military now uses mobile technology as a tool for information dissemination and collection in the battlefield arena. "Numerous agencies, including the Department of Defense (DoD), the Department of Homeland Security (DHS), the intelligence community, and law enforcement, are utilizing mobile technology for information management."


4G networking

One of the most important features of 4G mobile networks is the dominance of high-speed packet transmissions, or burst traffic, in the channels. If the same codes used in 2G-3G networks are applied to future 4G mobile or wireless networks, the detection of very short bursts will be a serious problem because of their very poor partial correlation properties. Recent studies indicate that the traditional multi-layer network architecture based on the OSI model may not be well suited for 4G mobile networks, where transactions of short packets will make up the major part of the traffic in the channels. Because packets from different mobiles carry completely different channel characteristics, the receiver must execute all necessary algorithms, such as channel estimation and interactions with all upper layers, within a very short time in order to detect each packet reliably and to reduce channel clutter.
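The partial-correlation problem can be pictured with a short, hedged sketch: a binary spreading code correlates sharply with itself over its full length, but when only a short burst of it is observed, the correlation estimate becomes much noisier and harder to distinguish from interference. The Python below is a minimal illustration, not a model of any specific 2G/3G/4G standard; the code length, burst length, and noise level are arbitrary assumptions.

    import random

    random.seed(42)

    def correlate(a, b):
        """Normalized correlation of two equal-length +/-1 sequences."""
        return sum(x * y for x, y in zip(a, b)) / len(a)

    # Hypothetical +/-1 spreading code (length chosen arbitrarily for illustration).
    code = [random.choice((-1, 1)) for _ in range(256)]

    def noisy(seq, sigma=0.8):
        """Add Gaussian noise to a transmitted sequence."""
        return [x + random.gauss(0, sigma) for x in seq]

    # Full-length detection: correlate the whole received sequence with the code.
    full_rx = noisy(code)
    print("full-length correlation: %.2f" % correlate(full_rx, code))

    # Short-burst detection: only the first 16 chips are available, so the
    # receiver must decide from a partial correlation, which is far noisier.
    burst_len = 16
    burst_rx = noisy(code[:burst_len])
    print("partial (16-chip) correlation: %.2f" % correlate(burst_rx, code[:burst_len]))

Running the sketch shows the full-length correlation staying close to 1, while the 16-chip estimate fluctuates widely from run to run, which is the detection difficulty the paragraph above describes.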

Operating systems

There are many smartphone operating systems available, including Symbian, Android, BlackBerry OS, webOS, Apple iOS, Windows Mobile Professional (touch screen), Windows Mobile Standard (non-touch screen), and Bada. Among the most popular are Apple iOS and the newest entrant, Android. Android is a mobile operating system (OS) developed by Google and is the first completely open-source mobile OS, meaning that it is free to any cell phone carrier. The Apple iPhone, which has shipped in several hardware generations such as the 3G and 3GS, is the most popular smartphone at this time, largely because of iOS and its App Store, from which users can download applications ("apps") such as games, GPS tools, utilities, and other software. Any user can also create their own apps and publish them to Apple's App Store. The Palm Pre, which runs webOS, has Internet functionality and supports web technologies such as CSS, HTML, and JavaScript. The BlackBerry, made by RIM, is a smartphone with a multimedia player and support for third-party software installation. Windows Mobile Professional smartphones (Pocket PC or Windows Mobile PDA) resemble PDAs and have touchscreen capabilities, while Windows Mobile Standard devices have no touch screen and instead use a trackball, touchpad, rockers, or similar controls.

Symbian is the original smartphone OS, with the richest history and the largest market share. Although no single Symbian device has sold as many units as the iPhone, Nokia and other manufacturers (currently including Sony Ericsson and Samsung, and previously Motorola) release a wide variety of Symbian models every year, which collectively gives Symbian the largest market share.

Channel hogging & file sharing

File sharing will take a hit. The typical web surfer wants to load a new web page every minute or so, and at 100 kbit/s a page downloads quickly enough; but because of changes to how wireless networks are policed, huge file transfers will not be possible, since service providers want to cut down on channel hogging. AT&T claimed that it would ban any of its users caught using peer-to-peer (P2P) file-sharing applications on its 3G network. It then became apparent that this would also keep users from using their iTunes programs, forcing them to find a Wi-Fi hotspot in order to download their music. The limitations of wireless networking will not be cured by 4G, as there are simply too many fundamental differences between wireless networking and other means of Internet access. If wireless vendors do not recognize these differences and bandwidth limitations, future wireless customers will find themselves quite disappointed and the market will suffer a setback.

Cloud Computing


Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a metered service over a network (typically the Internet).


Cloud computing logical diagram


Cloud computing is a marketing term for technologies that provide computation, software, data access, and storage services that do not require end-user knowledge of the physical location and configuration of the system that delivers the services.
It is also a delivery model for IT services based on Internet protocols, and it typically involves the provisioning of dynamically scalable and often virtualized resources. Cloud computing is a by-product and consequence of the ease of access to remote computing sites provided by the Internet. It may take the form of web-based tools or applications that users can access and use through a web browser as if the programs were installed locally on their own computers.
Cloud computing providers deliver applications via the Internet; these are accessed from web browsers and from desktop and mobile apps, while the business software and data are stored on servers at a remote location. In some cases, legacy applications (line-of-business applications that until now have been prevalent in thin-client Windows computing) are delivered via a screen-sharing technology, while the computing resources are consolidated at a remote data-centre location; in other cases, entire business applications have been coded using web-based technologies such as AJAX.
At the foundation of cloud computing is the broader concept of infrastructure convergence (or converged infrastructure) and shared services. This type of data-centre environment allows enterprises to get their applications up and running faster, with easier manageability and less maintenance, and enables IT to adjust IT resources (such as servers, storage, and networking) more rapidly to meet fluctuating and unpredictable business demand.
Most cloud computing infrastructures consist of services delivered through shared data centres, which appear to consumers as a single point of access for their computing needs. Commercial offerings may be required to meet service-level agreements (SLAs), but specific terms are less often negotiated by smaller companies.
The tremendous impact of cloud computing on business has prompted the United States federal government to look to the cloud as a means to reorganize its IT infrastructure and decrease its IT budgets. With top government officials mandating cloud adoption, many government agencies already have at least one or more cloud systems online.


Comparison

Cloud computing shares characteristics with the client-server model, grid computing, mainframe computing, utility computing, peer-to-peer architectures, and autonomic computing.

Characteristics

Cloud computing exhibits the following key characteristics:
  • Empowerment of end-users of computing resources by putting the provisioning of those resources in their own control, as opposed to the control of a centralized IT department
  • Agility improves with users' ability to re-provision technological infrastructure resources.
  • Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way the user interface facilitates interaction between humans and computers. Cloud computing systems typically use REST-based APIs (a hedged example of such a call is sketched after this list).
  • Cost is claimed to be reduced and in a public cloud delivery model capital expenditure is converted to operational expenditure. This is purported to lower barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and fewer IT skills are required for implementation (in-house).
  • Device and location independence enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
  • Multi-tenancy enables sharing of resources and costs across a large pool of users thus allowing for:
    • Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
    • Peak-load capacity increases (users need not engineer for highest possible load-levels)
    • Utilisation and efficiency improvements for systems that are often only 10–20% utilised.
  • Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business continuity and disaster recovery.
  • Scalability and Elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis near real-time, without users having to engineer for peak loads.
  • Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.
  • Security could improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. However, the complexity of security is greatly increased when data is distributed over a wider area or greater number of devices and in multi-tenant systems that are being shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
  • Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer.
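To make the API characteristic above concrete, here is a hedged sketch of what a REST-based provisioning request might look like. The endpoint, token, and JSON fields are entirely hypothetical and do not correspond to any particular provider's API; the point is only that machines drive the cloud through ordinary HTTP requests rather than through a human user interface.

    import json
    import urllib.request

    # Hypothetical endpoint and token -- not a real provider's API.
    API_URL = "https://cloud.example.com/v1/servers"
    API_TOKEN = "example-token"

    payload = {
        "name": "web-01",         # desired instance name (assumed field)
        "flavor": "small",        # instance size (assumed field)
        "image": "ubuntu-22.04",  # OS image (assumed field)
    }

    request = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )

    # A real call would be: response = urllib.request.urlopen(request)
    # and the provider would return the new server's ID and status as JSON.

The request is only constructed, not sent, since the endpoint is fictional; with a genuine provider the same pattern applies with that provider's documented URL and fields.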

History

The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the telephone network, and later to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents.
Cloud computing is a natural evolution of the widespread adoption of virtualisation, service-oriented architecture, autonomic, and utility computing. Details are abstracted from end-users, who no longer have need for expertise in, or control over, the technology infrastructure "in the cloud" that supports them.
The underlying concept of cloud computing dates back to the 1960s, when John McCarthy opined that "computation may someday be organised as a public utility." Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry and the use of public, private, government, and community forms, were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer Utility. Other scholars have shown that cloud computing's roots go all the way back to the 1950s when scientist Herb Grosch (the author of Grosch's law) postulated that the entire world would operate on dumb terminals powered by about 15 large data centers.
The actual term "cloud" borrows from telephony in that telecommunications companies, who until the 1990s offered primarily dedicated point-to-point data circuits, began offering Virtual Private Network (VPN) services with comparable quality of service but at a much lower cost. By switching traffic to balance utilisation as they saw fit, they were able to utilise their overall network bandwidth more effectively. The cloud symbol was used to denote the demarcation point between that which was the responsibility of the provider and that which was the responsibility of the user. Cloud computing extends this boundary to cover servers as well as the network infrastructure.
After the dot-com bubble, Amazon played a key role in the development of cloud computing by modernising their data centers, which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements whereby small, fast-moving "two-pizza teams" could add new features faster and more easily, Amazon initiated a new product development effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006.
In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. In early 2008, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds, and for the federation of clouds. In the same year, efforts were focused on providing QoS guarantees (as required by real-time interactive applications) to cloud-based infrastructures, in the framework of the IRMOS European Commission-funded project, resulting in a real-time cloud environment. By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them" and observed that "[o]rganisations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to cloud computing ... will result in dramatic growth in IT products in some areas and significant reductions in other areas."

Layers

Once an internet protocol connection is established among several computers, it is possible to share services within any one of the following layers.


Cloud computing stack (figure)

Client

A cloud client consists of computer hardware and/or computer software that relies on cloud computing for application delivery and that is in essence useless without it. Examples include some computers (example: Chromebooks), phones (example: Google Nexus series) and other devices, operating systems (example: Google Chrome OS), and browsers.

Application

Cloud application services or "Software as a Service (SaaS)" deliver software as a service over the Internet, eliminating the need to install and run the application on the customer's own computers and simplifying maintenance and support.
A cloud application is software provided as a service. It consists of the following: a package of interrelated tasks, the definition of these tasks, and the configuration files, which contain dynamic information about the tasks at run-time. Cloud tasks provide compute, storage, communication, and management capabilities. Tasks can be cloned into multiple virtual machines and are accessible through application programming interfaces (APIs). Cloud applications are a kind of utility computing that can scale out and in to match the workload demand. Cloud applications have a pricing model based on compute usage, storage usage, and tenancy metrics.
What makes a cloud application different from other applications is its elasticity: cloud applications have the ability to scale out and in, which is achieved by cloning tasks into multiple virtual machines at run-time to meet the changing workload. Configuration data is where the dynamic aspects of a cloud application are determined at run-time; there is no need to stop the running application or redeploy it in order to modify or change the information in this file.
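The elasticity described above boils down to a simple control loop: measure demand, divide by per-instance capacity, and clone or retire task instances to match. The sketch below is a minimal, self-contained illustration of that idea; the capacity figure, load samples, and scale limits are assumptions, and a real platform would call its provisioning API where the comments indicate.

    import math

    INSTANCE_CAPACITY = 100   # requests/sec one task instance can handle (assumed)
    MIN_INSTANCES, MAX_INSTANCES = 1, 20

    def desired_instances(load_rps):
        """Return how many task instances the current load calls for."""
        needed = math.ceil(load_rps / INSTANCE_CAPACITY)
        return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

    current = 2
    for load in (150, 480, 90, 1200):           # sampled workload in requests/sec
        target = desired_instances(load)
        if target > current:
            print(f"load={load}: scaling out {current} -> {target}")
        elif target < current:
            print(f"load={load}: scaling in {current} -> {target}")
        else:
            print(f"load={load}: holding at {current}")
        current = target                        # a real system would clone or
                                                # terminate virtual machines here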
SOA is an umbrella term that describes any kind of service. A cloud application is a service, and a cloud application meta-model is a SOA model that conforms to the SOA meta-model; this makes cloud applications SOA applications. However, SOA applications are not necessarily cloud applications. A cloud application is a SOA application that runs under a specific environment, the cloud computing environment (platform), which is characterized by horizontal scalability, rapid provisioning, ease of access, and flexible pricing. While SOA is a business model that addresses business process management, cloud architecture addresses many technical details that are environment-specific, which makes it more of a technical model.

Platform

Cloud platform services, also known as platform as a service (PaaS), deliver a computing platform and/or solution stack as a service, often consuming cloud infrastructure and sustaining cloud applications. They facilitate deployment of applications without the cost and complexity of buying and managing the underlying hardware and software layers. Cloud computing is becoming a major change in the industry, and one of the most important parts of this change is the rise of cloud platforms. Platforms let developers write applications that run in the cloud, or that use services provided by the cloud, or both. Different names are in use for such platforms, including the on-demand platform and Cloud 9. Regardless of the nomenclature, they all have great potential for development, and when development teams create applications for the cloud, each must build on its chosen cloud platform.

Infrastructure

Cloud infrastructure services, also known as "infrastructure as a service" (IaaS), deliver computer infrastructure – typically a platform virtualization environment – as a service, along with raw (block) storage and networking. Rather than purchasing servers, software, data-center space or network equipment, clients instead buy those resources as a fully outsourced service. Suppliers typically bill such services on a utility computing basis; the amount of resources consumed (and therefore the cost) will typically reflect the level of activity.
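Utility-style billing, as described above, simply meters consumption and multiplies by a rate. The sketch below shows the shape of such a calculation; the rates and usage figures are invented for illustration and do not reflect any real provider's price list.

    # Hypothetical unit prices (per VM-hour / per GB-month stored / per GB transferred).
    RATES = {"vm_hours": 0.05, "storage_gb_months": 0.10, "egress_gb": 0.09}

    def monthly_bill(usage):
        """Return (line items, total) for a month's metered usage."""
        items = {k: round(usage.get(k, 0) * rate, 2) for k, rate in RATES.items()}
        return items, round(sum(items.values()), 2)

    usage = {"vm_hours": 720, "storage_gb_months": 50, "egress_gb": 120}
    items, total = monthly_bill(usage)
    print(items)            # {'vm_hours': 36.0, 'storage_gb_months': 5.0, 'egress_gb': 10.8}
    print("total:", total)  # total: 51.8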

Server

The servers layer consists of computer hardware and/or computer software products that are specifically designed for the delivery of cloud services, including multi-core processors, cloud-specific operating systems and combined offerings.

Deployment models 



Cloud computing types

Public cloud

A public cloud is one based on the standard cloud computing model, in which a service provider makes resources, such as applications and storage, available to the general public over the Internet. Public cloud services may be free or offered on a pay-per-usage model.

Community cloud

Community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third-party and hosted internally or externally. The costs are spread over fewer users than a public cloud (but more than a private cloud), so only some of the cost savings potential of cloud computing are realized.

Hybrid cloud

Hybrid cloud is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models. It can also be defined as multiple cloud systems that are connected in a way that allows programs and data to be moved easily from one deployment system to another.

Private cloud

Private cloud is infrastructure operated solely for a single organization, whether managed internally or by a third-party and hosted internally or externally.
They have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".

Architecture


Cloud computing sample architecture 
Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over a loose coupling mechanism such as a messaging queue.
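A minimal way to picture the loose coupling mentioned above is two components that never call each other directly but exchange messages through a queue. The sketch below uses Python's standard-library queue within a single process purely as an illustration; a real cloud architecture would put a distributed message broker between separate services.

    import queue
    import threading

    messages = queue.Queue()          # stands in for a distributed message broker

    def front_end():
        """Producer: accepts 'requests' and enqueues work items."""
        for job_id in range(3):
            messages.put({"job": job_id, "action": "resize-image"})
        messages.put(None)            # sentinel: no more work

    def worker():
        """Consumer: processes work items without knowing who produced them."""
        while True:
            item = messages.get()
            if item is None:
                break
            print("worker handling", item)

    producer = threading.Thread(target=front_end)
    consumer = threading.Thread(target=worker)
    producer.start()
    consumer.start()
    producer.join()
    consumer.join()

Because the producer and consumer share only the queue, either side can be replaced, scaled, or restarted independently, which is the point of the loose coupling.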

The Intercloud

The Intercloud is an interconnected global "cloud of clouds" and an extension of the Internet "network of networks" on which it is based.

Cloud engineering

Cloud engineering is the application of engineering disciplines to cloud computing. It brings a systematic approach to the high level concerns of commercialization, standardization, and governance in conceiving, developing, operating and maintaining cloud computing systems. It is a multidisciplinary method encompassing contributions from diverse areas such as systems, software, web, performance, information, security, platform, risk, and quality engineering.

Issues

Privacy

The cloud model has been criticized by privacy advocates for the greater ease with which the companies hosting the cloud services can control, and thus monitor at will, lawfully or unlawfully, the communication and data stored between the user and the host company. Instances such as the secret NSA program, which worked with AT&T and Verizon to record over 10 million phone calls between American citizens, cause uncertainty among privacy advocates about the greater powers such arrangements give telecommunications companies to monitor user activity. While there have been efforts (such as US-EU Safe Harbor) to "harmonize" the legal environment, providers such as Amazon still cater to major markets (typically the United States and the European Union) by deploying local infrastructure and allowing customers to select "availability zones." Cloud computing poses privacy concerns because the service provider may, at any point in time, access the data that is in the cloud, and could accidentally or deliberately alter or even delete information.

Compliance

In order to obtain compliance with regulations including FISMA, HIPAA, and SOX in the United States, the Data Protection Directive in the EU and the credit card industry's PCI DSS, users may have to adopt community or hybrid deployment modes that are typically more expensive and may offer restricted benefits. This is how Google is able to "manage and meet additional government policy requirements beyond FISMA" and Rackspace Cloud or QubeSpace are able to claim PCI compliance.[63]
Many providers also obtain SAS 70 Type II certification, but this has been criticized on the grounds that the hand-picked set of goals and standards determined by the auditor and the auditee are often not disclosed and can vary widely. Providers typically make this information available on request, under non-disclosure agreement.
Customers in the EU contracting with cloud providers established outside the EU/EEA have to adhere to the EU regulations on export of personal data.

Legal

As can be expected with any revolutionary change in the landscape of global computing, certain legal issues arise, ranging from trademark infringement and security concerns to the sharing of proprietary data resources.

Open source

Open-source software has provided the foundation for many cloud computing implementations, one prominent example being the Hadoop framework.[68] In November 2007, the Free Software Foundation released the Affero General Public License, a version of GPLv3 intended to close a perceived legal loophole associated with free software designed to be run over a network.

Open standards

Most cloud providers expose APIs that are typically well-documented (often under a Creative Commons license) but also unique to their implementation and thus not interoperable. Some vendors have adopted others' APIs and there are a number of open standards under development, with a view to delivering interoperability and portability.

Security

As cloud computing is achieving increased popularity, concerns are being voiced about the security issues introduced through adoption of this new model. The effectiveness and efficiency of traditional protection mechanisms are being reconsidered as the characteristics of this innovative deployment model differ widely from those of traditional architectures.
The relative security of cloud computing services is a contentious issue that may be delaying its adoption. Issues barring the adoption of cloud computing are due in large part to the private and public sectors' unease surrounding the external management of security-based services. It is the very nature of cloud computing-based services, private or public, that promotes external management of provided services. This creates a strong incentive for cloud computing service providers to prioritize building and maintaining strong management of secure services. Security issues have been categorized into sensitive data access, data segregation, privacy, bug exploitation, recovery, accountability, malicious insiders, management console security, account control, and multi-tenancy issues. Solutions to various cloud security issues range from cryptography, particularly public key infrastructure (PKI), to the use of multiple cloud providers, standardization of APIs, and improvements to virtual machine support and legal support.
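One of the cryptographic mitigations mentioned above, public key infrastructure, can be sketched with the widely used Python `cryptography` package: data is encrypted with a public key before it leaves the client, so the cloud provider stores only ciphertext it cannot read. This is a minimal illustration of the idea, not a complete key-management scheme; real deployments combine RSA with symmetric session keys, certificates, and rotation policies.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Key pair generated client-side; only the public key would ever be shared.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    )

    secret = b"customer record to be stored in the cloud"
    ciphertext = public_key.encrypt(secret, oaep)      # what the provider stores
    recovered = private_key.decrypt(ciphertext, oaep)  # only the key holder can do this
    assert recovered == secret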

Sustainability

Although cloud computing is often assumed to be a form of "green computing", there is as yet no published study to substantiate this assumption. Siting the servers affects the environmental impact of cloud computing: in areas where the climate favors natural cooling and renewable electricity is readily available, the environmental effects will be more moderate. (The same holds true for "traditional" data centers.) Countries with favorable conditions, such as Finland, Sweden and Switzerland, are therefore trying to attract cloud computing data centers. Energy efficiency in cloud computing can result from energy-aware scheduling and server consolidation. However, in the case of clouds distributed over data centers with different sources of energy, including renewable sources, a small compromise on energy-consumption reduction could result in a large reduction in carbon footprint.
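Server consolidation, one of the energy-saving techniques mentioned above, is essentially a bin-packing problem: pack virtual machine loads onto as few physical hosts as possible and power the rest down. The sketch below uses a simple first-fit-decreasing heuristic; the load figures and host capacity are assumptions chosen for illustration, and production schedulers weigh many more constraints.

    HOST_CAPACITY = 1.0                      # normalized capacity of one physical host

    def consolidate(vm_loads):
        """First-fit-decreasing packing of VM loads onto hosts."""
        hosts = []                           # each host is a list of VM loads
        for load in sorted(vm_loads, reverse=True):
            for host in hosts:
                if sum(host) + load <= HOST_CAPACITY:
                    host.append(load)
                    break
            else:
                hosts.append([load])         # no existing host fits: power one on
        return hosts

    vm_loads = [0.6, 0.3, 0.5, 0.2, 0.2, 0.1, 0.4]   # assumed VM utilizations
    hosts = consolidate(vm_loads)
    print(f"{len(vm_loads)} VMs packed onto {len(hosts)} hosts: {hosts}")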

Abuse

As with privately purchased hardware, crackers posing as legitimate customers can purchase the services of cloud computing for nefarious purposes, including password cracking and launching attacks using the purchased services. In 2009, a banking trojan illegally used the popular Amazon service as a command and control channel that issued software updates and malicious instructions to PCs that were infected by the malware.

Virtualization

Virtualization, in computing, is the creation of a virtual (rather than actual) version of something, such as a hardware platform, operating system, a storage device or network resources.

Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients can pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware-resource utilization.


Types of virtualization

Hardware

Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer that is running Microsoft Windows may host a virtual machine that looks like a computer with Ubuntu Linux operating system; Ubuntu-based software can be run on the virtual machine.
In hardware virtualization, the host machine is the actual machine on which the virtualization takes place, and the guest machine is the virtual machine. The words host and guest are used to distinguish the software that runs on the actual machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or Virtual Machine Monitor.
Different types of hardware virtualization include:
  1. Full virtualization: Almost complete simulation of the actual hardware to allow software, which typically consists of a guest operating system, to run unmodified
  2. Partial virtualization: Some but not all of the target environment is simulated. Some guest programs, therefore, may need modifications to run in this virtual environment.
  3. Paravirtualization: A hardware environment is not simulated; however, the guest programs are executed in their own isolated domains, as if they are running on a separate system. Guest programs need to be specifically modified to run in this environment.
Hardware-assisted virtualization is a way of improving the efficiency of hardware virtualization. It involves employing specially-designed CPUs and hardware components that help improve the performance of a guest environment.
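As a small practical aside, on Linux one can check whether the CPU advertises the hardware-assisted virtualization extensions mentioned above (Intel VT-x appears as the vmx flag, AMD-V as svm) by reading /proc/cpuinfo. This is a hedged convenience sketch; hypervisors perform far more thorough capability checks.

    def hardware_virtualization_flags(path="/proc/cpuinfo"):
        """Return the hardware-virtualization CPU flags found on a Linux host."""
        try:
            with open(path) as f:
                cpuinfo = f.read()
        except OSError:
            return set()                       # not Linux, or /proc unavailable
        flags = set()
        for line in cpuinfo.splitlines():
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
        return flags & {"vmx", "svm"}          # vmx = Intel VT-x, svm = AMD-V

    found = hardware_virtualization_flags()
    print("hardware-assisted virtualization:",
          ", ".join(sorted(found)) if found else "not detected")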
Hardware virtualization is not the same as hardware emulation: in hardware emulation, a piece of hardware imitates another, while in hardware virtualization, a hypervisor (a piece of software) imitates a particular piece of computer hardware or the entire computer. Furthermore, a hypervisor is not the same as an emulator; both are computer programs that imitate hardware, but the contexts in which the two terms are used differ.

Desktop

Desktop virtualization is the concept of separating the logical desktop from the physical machine.
One form of desktop virtualization, virtual desktop infrastructure (VDI), can be thought as a more advanced form of hardware virtualization: Instead of directly interacting with a host computer via a keyboard, mouse and monitor connected to it, the user interacts with the host computer over a network connection (such as a LAN, Wireless LAN or even the Internet) using another desktop computer or a mobile device. In addition, the host computer in this scenario becomes a server computer capable of hosting multiple virtual machines at the same time for multiple users.
Another form, session virtualization, allows multiple users to connect and log into a shared but powerful computer over the network and use it simultaneously. Each is given a desktop and a personal folder in which they store their files.
Thin clients, which are seen in desktop virtualization, are simple and/or cheap computers that are primarily designed to connect to the network; they may lack significant hard disk storage space, RAM or even processing power.
Using desktop virtualization allows a company to stay more flexible in an ever-changing market. Virtual desktops allow development to be implemented more quickly and expertly, and proper testing can be done without disturbing the end user. Moving the desktop environment to the cloud also reduces single points of failure when a third party controls the security and infrastructure.[4]

Software

Memory

  • Memory virtualization, aggregating RAM resources from networked systems into a single memory pool
  • Virtual memory, giving an application program the impression that it has contiguous working memory, isolating it from the underlying physical memory implementation

Storage

Data

  • Data virtualization, the presentation of data as an abstract layer, independent of underlying database systems, structures and storage (a small sketch follows this list)
  • Database virtualization, the decoupling of the database layer, which lies between the storage and application layers within the application stack
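As a rough illustration of the data virtualization idea referenced above, the sketch below places two very different backing stores (an in-memory dictionary and a SQLite table) behind one uniform lookup interface, so a caller never needs to know where or how a record is stored. The class names, schema, and data are invented for the example.

    import sqlite3

    class DictSource:
        """A trivial in-memory data source."""
        def __init__(self, records):
            self._records = records
        def get(self, key):
            return self._records.get(key)

    class SqliteSource:
        """A relational data source exposing the same interface."""
        def __init__(self):
            self._db = sqlite3.connect(":memory:")
            self._db.execute("CREATE TABLE records (key TEXT PRIMARY KEY, value TEXT)")
            self._db.execute("INSERT INTO records VALUES ('b', 'from-sqlite')")
        def get(self, key):
            row = self._db.execute(
                "SELECT value FROM records WHERE key = ?", (key,)).fetchone()
            return row[0] if row else None

    class DataVirtualizationLayer:
        """Presents many heterogeneous sources as one abstract lookup."""
        def __init__(self, *sources):
            self._sources = sources
        def get(self, key):
            for source in self._sources:
                value = source.get(key)
                if value is not None:
                    return value
            return None

    layer = DataVirtualizationLayer(DictSource({"a": "from-dict"}), SqliteSource())
    print(layer.get("a"))   # from-dict
    print(layer.get("b"))   # from-sqlite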

Network

IT Service Management

IT service management (ITSM or IT services) is a discipline for managing information technology (IT) systems, philosophically centered on the customer's perspective of IT's contribution to the business. ITSM stands in deliberate contrast to technology-centered approaches to IT management and business interaction. The following represents a characteristic statement from the ITSM literature:

Providers of IT services can no longer afford to focus on technology and their internal organization[;] they now have to consider the quality of the services they provide and focus on the relationship with customers.

No one author, organization, or vendor owns the term "IT service management" and the origins of the phrase are unclear.

ITSM is process-focused and in this sense has ties and common interests with the process improvement movement (e.g., TQM, Six Sigma, business process management, CMMI) and its frameworks and methodologies. The discipline is not concerned with the details of how to use a particular vendor's product, or necessarily with the technical details of the systems under management. Instead, it focuses upon providing a framework to structure IT-related activities and the interactions of IT technical personnel with business customers and users.

ITSM is generally concerned with the "back office" or operational concerns of information technology management (sometimes known as operations architecture), and not with technology development. For example, the process of writing computer software for sale, or designing a microprocessor would not be the focus of the discipline, but the computer systems used by marketing and business development staff in software and hardware companies would be. Many non-technology companies, such as those in the financial, retail, and travel industries, have significant information technology systems which are not exposed to customers.

In this respect, ITSM can be seen as analogous to an enterprise resource planning (ERP) discipline for IT – although its historical roots in IT operations may limit its applicability across other major IT activities, such as IT portfolio management and software engineering.



IT Service Management is an enabler of information technology governance (or information management) objectives.

The concept of "service" in an IT sense has a distinct operational connotation, but it would be incorrect then to assume that IT Service Management is only about IT operations. However, it does not encompass all of IT practice, and this can be a controversial matter.

It does not typically include project management or program management concerns. In the UK for example, the IT Infrastructure Library (ITIL), a government-developed ITSM framework, is often paired with the PRojects IN Controlled Environments (PRINCE2) project methodology and Structured Systems Analysis and Design Method for systems development.

ITSM is related to the field of Management Information Systems (MIS) in scope. However, ITSM has a distinct practitioner point of view, and is more introspective (i.e. IT thinking about the delivery of IT to the business) as opposed to the more academic and outward facing connotation of MIS (IT thinking about the 'information' needs of the business).

IT Service Management in the broader sense overlaps with the disciplines of business service management and IT portfolio management, especially in the area of IT planning and financial control.

Frameworks

There are a variety of frameworks and authors contributing to the overall ITSM discipline. There are a variety of proprietary approaches available as well.

Professional organizations

There is an international, chapter-based professional association, the IT Service Management Forum (ITSMF), which has a semi-official relationship with ITIL and the ITSM audit standard ISO/IEC 20000. There is also a global professional association, the IT Service Management Professionals Association (IT-SMPa).

Information Technology Infrastructure Library
Main article: Information Technology Infrastructure Library

IT Service Management is often equated with the Information Technology Infrastructure Library (ITIL), an official publication of the Office of Government Commerce in the United Kingdom. However, while a version of ITSM is a component of ITIL, ITIL also covers a number of related but distinct disciplines and the two are not synonymous.

The current version of the ITIL framework is the 2011 edition. The 2011 edition, published in July 2011, is a revision of the previous edition, known as ITIL version 3 (published in June 2007), which was itself a major upgrade from version 2 (2001). Whereas version 2 was process oriented (split into two groups: service support and service delivery), version 3 is service oriented. Since ITIL v3, the various ITIL processes have been grouped into five stages of the service lifecycle: service strategy, service design, service transition, service operation, and continual service improvement (CSI). The term "service management" is interpreted by many in the industry as ITSM, but again, there are other frameworks, and conversely, the entire ITIL library might be seen as IT service management in a larger sense.

Other frameworks and concern with the overhead

Analogous to debates in software engineering between agile and prescriptive methods, there is debate between lightweight versus heavyweight approaches to IT service management. Lighter weight ITSM approaches include:

* ITIL Small-scale Implementation,[4] colloquially called “ITIL Lite”, is an official part of the ITIL framework.
* FITS was developed for UK schools. It is a simplification of ITIL.
* Core Practice (CoPr or “copper”) calls for limiting Best Practice to areas where there is a business case for it, and in other areas just doing the minimum necessary.
* OpenSDLC.org, a Creative Commons-licensed ITSM/SDLC framework wiki.
* Microsoft Operations Framework (MOF 4) covers the IT service management lifecycle with a practical focus.

Governance and audit

Several benchmarks and assessment criteria have emerged that seek to measure the capability of an organization and the maturity of its approach to service management. Primarily, these alternatives provide a focus on compliance and measurement and therefore are more aligned with corporate governance than with IT service management per se.

* ISO/IEC 20000 (and its ancestor BS 15000). This standard is not identical in taxonomy to ITIL; it includes a number of additional requirements not detailed within ITIL, as well as some differences. Adopting ITIL best practices is nevertheless a good first step for organizations wishing to achieve ISO/IEC 20000 certification for their IT Service Management processes.
* COBIT (or the lighter COBIT Quickstart) is comprehensive and widely embraced. It incorporates IT Service Management within its Control Objectives for Support and Delivery.