Capacity management's goal is to ensure that information technology resources are sufficient to meet upcoming business requirements cost-effectively. One common interpretation of capacity management is described in the ITIL framework. ITIL version 3 views capacity management as comprising three sub-processes: business capacity management, service capacity management, and component capacity management. As the usage of IT services changes and functionality evolves, the amount of central processing units (CPUs), memory, and storage allocated to a physical or virtual server also changes. If there are spikes in, for example, processing power at a particular time of the day, capacity management proposes analyzing what is happening at that time and making changes to maximize use of the existing IT infrastructure; for example, tuning the application, or moving a batch cycle to a quieter period. This capacity planning identifies any potential capacity-related issues likely to arise and justifies any necessary investment decisions - for example, the server requirements to accommodate future IT resource demand, or a data center consolidation. These activities are intended to optimize performance and efficiency, and to plan for and justify financial investments. Capacity management is concerned with:
* Monitoring the performance and throughput or load on a server, server farm, or property
* Performance analysis of measurement data, including analysis of the impact of new releases on capacity
* Performance tuning of activities to ensure the most efficient use of existing infrastructure
* Understanding the demands on the service and future plans for workload growth (or shrinkage)
* Influences on demand for computing resources
* Capacity planning of storage, computer hardware, software, and connection infrastructure resources required over some future period of time
Capacity management interacts with the discipline of performance engineering, both during the requirements and design activities of building a system and when using performance monitoring.
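The forecasting side of capacity planning can be sketched in a few lines. The following is a minimal illustration, not a production method: it fits a linear trend to hypothetical monthly peak CPU-utilization samples (all numbers invented) and estimates when the trend would cross an investment threshold.

```python
# Minimal capacity-planning sketch: project when CPU utilization will
# exceed a threshold, assuming roughly linear growth.
# The samples are hypothetical monthly peak-utilization values (percent).
samples = [42.0, 44.5, 47.0, 49.5, 52.0, 54.5]  # months 0..5
threshold = 80.0

# Least-squares fit y = slope*x + intercept over x = 0..n-1.
n = len(samples)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(samples) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Months until the fitted trend reaches the threshold.
months_to_threshold = (threshold - intercept) / slope
print(f"growth: {slope:.2f}%/month, "
      f"threshold reached around month {months_to_threshold:.1f}")
```

Real capacity models account for seasonality, workload mix, and headroom policy; a linear fit is only the simplest possible projection.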


Factors affecting network performance

Not all networks are the same. As data is broken into component parts (often known as frames, packets, or segments) for transmission, several factors can affect their delivery.
* Delay: It can take a long time for a packet to be delivered across intervening networks. In reliable protocols, where a receiver acknowledges delivery of each chunk of data, it is possible to measure this as round-trip time.
* Jitter: This is the variability of delay. Low jitter is desirable, as it ensures a steady stream of packets being delivered. If delay varies by more than about 200 ms, buffers may be starved and have no data to process.
* Reception order: Some real-time protocols, such as voice and video, require packets to arrive in the correct sequence to be processed. If packets arrive out of order, they may have to be dropped because they cannot be inserted into the part of the stream that has already been played.
* Packet loss: In some cases, intermediate devices in a network will lose packets. This may be due to errors, to overloading of the intermediate network, or to the intentional discarding of traffic in order to enforce a particular service level.
* Retransmission: When packets are lost in a reliable network, they are retransmitted. This incurs two delays: first, the delay from re-sending the data; and second, the delay resulting from waiting until the data is received in the correct order before forwarding it up the protocol stack.
* Throughput: The amount of traffic a network can carry is measured as throughput, usually in terms such as kilobits per second. Throughput is analogous to the number of lanes on a highway, whereas latency is analogous to its speed limit.
These factors, and others (such as the performance of the network signaling on the end nodes, compression, encryption, concurrency, and so on) all affect the effective performance of a network. In some cases, the network may not work at all; in others, it may be slow or unusable. And because applications run over these networks, application performance suffers. Various intelligent solutions are available to ensure that traffic over the network is effectively managed to optimize performance for all users. See traffic shaping.
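Several of the metrics above can be derived from a simple series of probe results. The sketch below uses invented ping-style samples (sequence number and round-trip time, with `None` marking a lost probe) to compute loss, average RTT, a simplified jitter figure, and reception order; the jitter formula is a plain mean of consecutive differences, a simplification of the RFC 3550 interarrival-jitter estimator.

```python
# Hypothetical probe results: (sequence_number, rtt_ms), or None for a
# lost probe.
probes = [(0, 20.1), (1, 21.3), None, (3, 19.8), (4, 24.0), (5, 20.5)]

received = [p for p in probes if p is not None]
loss_pct = 100.0 * (len(probes) - len(received)) / len(probes)

rtts = [rtt for _, rtt in received]
avg_rtt = sum(rtts) / len(rtts)

# Jitter as the mean absolute difference between consecutive RTTs
# (a simplification of the RFC 3550 estimator).
jitter = sum(abs(a - b) for a, b in zip(rtts, rtts[1:])) / (len(rtts) - 1)

# Reception order: sequence numbers of received probes should increase.
in_order = all(a[0] < b[0] for a, b in zip(received, received[1:]))

print(f"loss={loss_pct:.1f}%  avg_rtt={avg_rtt:.1f}ms  "
      f"jitter={jitter:.2f}ms  in_order={in_order}")
```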


The performance management discipline

Network performance management (NPM) consists of measuring, modeling, planning, and optimizing networks to ensure that they carry traffic with the speed, reliability, and capacity that is appropriate for the nature of the application and the cost constraints of the organization. Different applications warrant different blends of capacity, latency, and reliability. For example:
* Streaming video or voice can tolerate some unreliability (brief moments of static) but needs very low latency so that lags don't occur
* Bulk file transfer or e-mail must be reliable and have high capacity, but doesn't need to be instantaneous
* Instant messaging doesn't consume much bandwidth, but should be fast and reliable
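These blends can be expressed as per-application targets. The values below are purely illustrative, not standardized thresholds; the point is that the same measured link can be acceptable for one application class and unacceptable for another.

```python
# Illustrative (not standardized) per-application targets reflecting
# different blends of latency sensitivity and reliability.
targets = {
    "voice":     {"max_latency_ms": 150,  "max_loss_pct": 5.0},
    "file_xfer": {"max_latency_ms": 2000, "max_loss_pct": 0.0},
    "im":        {"max_latency_ms": 500,  "max_loss_pct": 0.0},
}

def meets_targets(app, latency_ms, loss_pct):
    """Check a measured link against one application's targets."""
    t = targets[app]
    return latency_ms <= t["max_latency_ms"] and loss_pct <= t["max_loss_pct"]

# A link with 120 ms latency and 1% loss suits voice but not bulk
# transfer, which demands full reliability.
print(meets_targets("voice", 120, 1.0))      # True
print(meets_targets("file_xfer", 120, 1.0))  # False
```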


Network performance management tasks and classes of tools

Network performance management is a core component of the FCAPS ISO telecommunications framework (the 'P' stands for performance in this acronym). It enables network engineers to proactively prepare for degradations in their IT infrastructure and ultimately improve the end-user experience. Network managers perform many tasks, including performance measurement, forensic analysis, capacity planning, and load-testing or load generation. They also work closely with application developers and IT departments, who rely on them to deliver the underlying network services.
* For ''performance measurement'', operators typically measure the performance of their networks at different levels. They either use per-port metrics (how much traffic on port 80 flowed between a client and a server, and how long did it take) or they rely on end-user metrics (how fast did the login page load for Bob).
** Per-port metrics are collected using flow-based monitoring and protocols such as NetFlow (now standardized as IPFIX) or RMON.
** End-user metrics are collected through web logs, synthetic monitoring, or real user monitoring. An example is ART (application response time), which provides end-to-end statistics that measure quality of experience.
* For ''forensic analysis'', operators often rely on sniffers that break down transactions by their protocols and can locate problems such as retransmissions or protocol negotiations.
* For ''capacity planning'', modeling tools such as Aria Networks, OPNET, PacketTrap, NetSim, NetFlow and sFlow Analyzer, or NetQoS that project the impact of new applications or increased usage are invaluable. According to Gartner, through 2018 more than 30% of enterprises will use capacity management tools for their critical IT infrastructures, up from less than 5% in 2014. These capacity management tools help infrastructure and operations management teams plan and optimize IT infrastructures and tools, and balance the use of external and cloud computing service providers.
* For ''load generation'' that helps to understand the breaking point, operators may use software or appliances that generate scripted traffic. Some hosted service providers also offer pay-as-you-go traffic generation for sites that face the public Internet.
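The per-port accounting described under performance measurement amounts to aggregating flow records by a key such as destination port. The sketch below works on invented tuples in the spirit of NetFlow/IPFIX records, not any actual export format.

```python
from collections import defaultdict

# Hypothetical flow records: (src_ip, dst_ip, dst_port, bytes), loosely
# modeled on the fields a NetFlow/IPFIX exporter would report.
flows = [
    ("10.0.0.1", "10.0.0.9", 80,  120_000),
    ("10.0.0.2", "10.0.0.9", 80,  80_000),
    ("10.0.0.1", "10.0.0.9", 443, 50_000),
    ("10.0.0.3", "10.0.0.9", 25,  10_000),
]

# Aggregate traffic volume by destination port.
by_port = defaultdict(int)
for _src, _dst, port, nbytes in flows:
    by_port[port] += nbytes

# Report ports by descending volume - the kind of per-port metric
# operators collect for performance measurement.
for port, total in sorted(by_port.items(), key=lambda kv: -kv[1]):
    print(f"port {port}: {total} bytes")
```

Production collectors additionally window these aggregates over time and key on more fields (interface, source, protocol), but the core operation is this group-and-sum.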


Next generation NPM tools

Next-generation NPM tools are those that improve network management by automating the collection of network data, including capacity issues, and automatically interpreting it. Terry Slattery, editor at NoJitter.com, compares three such tools, VMware's vRealize Network Insight, PathSolutions TotalView, and Kemp Flowmon, in the article ''The Future of Network Performance Management'', June 10, 2021.


The future of NPM

The future of network management is a radically expanding area of development, according to Terry Slattery on June 10, 2021: "We're starting to see more analytics of network data at levels that weren’t possible 10-15 years ago, due to limitations that no longer exist in computing, memory, storage, and algorithms. New approaches to network management promise to help us detect and resolve network problems... It’s certainly an interesting and evolving field."


See also

* Application performance management
* Capacity planning
* IT operations analytics
* ITIL
* Network monitoring
* Network planning and design
* Performance analysis
* Performance tuning

