The world of the data center is unforgiving: every day, IT managers and network administrators face a new challenge. One crucial challenge arises when mechanical devices start causing issues and their performance falls below expectations. To avoid such problems, network-planning managers comply with manufacturers' recommendations to refresh equipment every 3 to 5 years, even though this equipment is built to last and can provide service for 25 to 40 years.
Complying with a manufacturer's lifecycle plan increases the cost of maintaining IT networks. Announcing a product's end-of-life and end-of-service to force companies to upgrade to new equipment is often a strategy manufacturers use to keep their business afloat. It is therefore important that CTOs and IT managers take ownership of their products' lifecycles, research strategies for maintaining assets, and look for alternatives to expensive equipment.
It would be very helpful to be able to predict the average lifespan of a product. Fortunately, MTBF and MTTF estimate the number of hours a component, system, or assembly will operate before it fails.
Mean Time Between Failures (MTBF) is often expressed as the number of failures per million hours of operation for a product. It is the most popular KPI for a product's lifespan and is essential to the end user's decision-making when deploying equipment into a critical environment. MTBF may also serve as a reference point in a Request For Quote (RFQ). Mean Time To Failure (MTTF) is the corresponding reliability measure for non-repairable systems: the mean time expected until the first failure of a piece of equipment.
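To make the relationship between MTBF and failures per million hours concrete, here is a minimal Python sketch using hypothetical fleet numbers (the unit count, run hours, and failure count are illustrative assumptions, not data from any real deployment):

```python
# Hypothetical example: a fleet of 200 repairable units, each running
# 8,760 hours (one year), with 4 failures observed over that period.
units = 200
hours_per_unit = 8_760
failures = 4

total_hours = units * hours_per_unit            # accumulated device-hours
mtbf_hours = total_hours / failures             # mean time between failures
failures_per_million = failures / total_hours * 1_000_000

print(f"Total device-hours:   {total_hours:,}")        # 1,752,000
print(f"MTBF:                 {mtbf_hours:,.0f} hours") # 438,000 hours
print(f"Failures per 10^6 h:  {failures_per_million:.3f}")  # 2.283
```

The two figures are reciprocal views of the same quantity: a vendor quoting "2.28 failures per million hours" is quoting an MTBF of roughly 438,000 hours. For non-repairable parts, the same arithmetic applied to time-to-first-failure data yields the MTTF instead.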
To align maintenance tasks with failure modes, failures can be grouped into three categories:
Induced: failures caused by an outside force. Induced failures must be recognized and analyzed to find the root cause.
Intermittent: failures that can happen at any time. The implication of these so-called 'random' failures is that an MTBF cannot be determined. They can often be detected through process and Preventive Maintenance (PM) monitoring. When the onset of failure cannot be effectively determined, network administrators either increase PM frequency or schedule new procedures to reduce these failures.
Wear-out: failures with a known MTBF that occur when a system is operated beyond its useful life. In general, they can be detected through process and PM monitoring, but time-based refurbishment generally proves to be the best maintenance strategy.
When replacement becomes necessary, the planning office can build a strategy around evaluating and adopting refurbished equipment to keep costs low without compromising brand loyalty or network performance. Statistically, refurbished products have the same MTBF as new equipment, because they must pass rigorous tests during the refurbishing process before being shipped back to market. Compared with new equipment, refurbished products come with the same or a longer warranty, quicker availability, and affordable advance replacement.
We welcome a discussion with your network planning office or CTO about implementing maintenance strategies that improve the return on investment of your installed network equipment portfolio.