Setting the Bar for Data Center Infrastructure

How did Uptime Institute's 20-year-old Tier certification become the industry standard for building data centers? Senior Vice President of Marketing Mark Harris explains the steps and shares his expectations for the future of the data center business.

Mark Harris, Senior Vice President of Marketing, Uptime Institute. Image courtesy of Uptime Institute

Investment in data centers shows no signs of slowing down anytime soon, as more and more entities, both domestic and foreign sources of capital, are drawn to the asset class. The result is unprecedented demand for data center space, from investors to companies looking to establish their own disaster recovery facilities, among a myriad of other players.

Data centers are almost always purpose-built, with costs far exceeding those of typical commercial office buildings, and they come with distinct requirements, such as access to multiple fiber networks and power feeds, and locations away from crowded areas.

The Uptime Institute offers data center developers and operators its Tier certification system, which, broadly speaking, determines where and how a specific data center could and should be built. Commercial Property Executive reached out to Mark Harris, senior vice president of marketing at Uptime Institute, who draws from his 25 years of experience in the IT world to shed light on the process and its value.

Uptime Institute’s Tier certification process is more than 20 years old and has become the industry standard for building data centers. Tell us how the process has evolved over time.

Harris: The Tier Standard for topology is results-oriented, so it accommodates any number of approaches to achieving the desired results. Whereas most other standards are prescriptive about the technologies or configurations required, the Tier Standard is focused on designing data centers that exhibit a desired result, regardless of how that result is achieved.

Over the years, we have observed thousands of data centers, so we understand the behaviors to expect when various approaches are used, and we have ever more primary observational data to draw from when we offer our guidance. That is the power of the Tier Standard—always focused on realizing desired results. In a nutshell, we have seen so many sites that we know exactly how various deployments and configurations will behave over a wide range of operating conditions.

How do data center providers choose which Tier to aim for?

Harris: It’s based on their business requirements. The common mistake is assuming one size fits all when it comes to data centers. The higher the Tier rating, the higher the cost to deliver that level of resiliency. Hence, there is no single Tier that fits all business needs. Each application has a specific value to the business, so the cost model to deliver that application varies. The most effective approach to running IT as a business is to match each application to a suitable Tier-rated platform at a cost structure that can be justified.

As a general rule, applications that are focused on internal audiences/constituents—e-mail, for example—can withstand more unplanned downtime. Applications that are outwardly focused—such as e-commerce—must always remain online. Hence, the investments required to deliver these various applications can be tuned for the desired results. Many customers match their applications to their available data center resources, with each site built to a defined resiliency of Tier II, III or IV.
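
As a rough illustration of that matching exercise, the sketch below maps an application's tolerance for unplanned annual downtime to a minimum Tier level. The downtime thresholds and example applications are hypothetical placeholders, not Uptime Institute guidance; real decisions weigh cost, risk and business impact far more carefully.

```python
# Hypothetical sketch: match applications to a minimum Tier level based on
# how much unplanned downtime per year the business can tolerate.
# Thresholds and example values are illustrative only, not Uptime Institute figures.

from dataclasses import dataclass

@dataclass
class Application:
    name: str
    max_downtime_hours_per_year: float  # assumed business tolerance
    outward_facing: bool                # e.g., e-commerce vs. internal e-mail

def minimum_tier(app: Application) -> str:
    """Return a rough minimum Tier rating for an application (illustrative rules)."""
    if app.outward_facing or app.max_downtime_hours_per_year < 1:
        return "Tier IV"   # fault-tolerant platform for always-on services
    if app.max_downtime_hours_per_year < 8:
        return "Tier III"  # concurrently maintainable
    if app.max_downtime_hours_per_year < 24:
        return "Tier II"   # redundant capacity components
    return "Tier I"        # basic capacity

portfolio = [
    Application("e-commerce storefront", 0.5, outward_facing=True),
    Application("internal e-mail", 20.0, outward_facing=False),
    Application("batch reporting", 48.0, outward_facing=False),
]

for app in portfolio:
    print(f"{app.name}: {minimum_tier(app)}")
```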

What are some of the most common challenges that arise in the Tier certification process, both before and after the deployment of a new facility?

Harris: Tier certification is the only mechanism available to assure that well-intentioned design and construction processes have produced a data center that will perform as expected. Owners of sites that desire Tier certification should be able to provide all the documentation and a narrative of their design approach during our discovery process. These discussions can be lengthy because we are looking for expected behaviors and establishing an understanding of the designers’ reasons for making the various choices they have made. Each part of the design will be studied in great detail, so the documentation and the designers themselves must be available during this discovery process.

Our experience across thousands of sites over two dozen years is used to help our customers understand what to expect of their designs, and we will iteratively review those designs as changes are made upon request. The process is specific and directed, with the key being results. As one of the last steps in certification, we simulate extreme conditions—including full utility power failure—to demonstrate the design’s resiliency.

Companies are starting to implement space-efficient designs. We are seeing more micro/modular deployments at the edge, as well as multi-story facilities, for example. How do you expect these trends to impact the data center certification process?

Harris: Data center Tier certification becomes even more essential as the absolute capacity and overall density of a data center increase. Whereas a single site may have housed 1,000 dual-core servers in years past, the same footprint may now hold ten times the number of cores.

Failure in such a high-density data center could affect ten times the number of transactions, customers or other units of work. Failure of any type in a high-density environment may no longer be acceptable, so Tier certification becomes increasingly important as space efficiency increases. And as these sites are replicated across a campus, the ability to do work must be extremely well understood for each portion of the computing solution, each data center site, each campus and so on.
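
To make that scaling concrete, a back-of-the-envelope calculation along the lines of Harris’ example shows how the impact of a single-site failure grows with density. The cores-per-server and workload-per-core figures below are assumed, illustrative values only.

```python
# Illustrative arithmetic only: how the "blast radius" of a site failure grows
# as density increases. Server counts and workload-per-core are assumptions.

legacy_servers = 1_000
legacy_cores_per_server = 2
legacy_cores = legacy_servers * legacy_cores_per_server   # 2,000 cores

density_factor = 10                                       # per the example above
modern_cores = legacy_cores * density_factor              # 20,000 cores in the same footprint

transactions_per_core = 500  # hypothetical units of work handled per core
print("Work at risk, legacy site:      ", legacy_cores * transactions_per_core)
print("Work at risk, high-density site:", modern_cores * transactions_per_core)
# A single failure now puts ten times as many transactions or customers at risk.
```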

An increasing number of data centers are becoming outdated by today’s standards. How is this older inventory used and managed in 2019?

Harris: A technology refresh cycle has always been highly desirable but, because of the complexity involved, has rarely been mandated. In the IT industry, the average technology refresh cycle is about three to four years, a balance between value, burdened cost and overall performance. Devices kept beyond the refresh cycle consume more space and power, cost more to maintain and lose their depreciation savings. Now that virtualization, dynamic workload migration and software-defined approaches are commonplace, it is less problematic to change the underlying physical devices. The industry’s people, processes and procedures simply need to be recalibrated to take this proactive approach on a regular basis. Staying on the performance curve is fiscally responsible.
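
As a purely hypothetical illustration of that balance, the sketch below compares the annual burdened cost of keeping aging hardware against refreshing it. Every figure (power draw, energy price, maintenance and depreciation amounts) is an assumed placeholder, not data from Uptime Institute.

```python
# Hypothetical cost comparison for a technology refresh decision.
# All dollar figures, power draws and savings are assumptions for illustration.

def annual_cost(power_kw, energy_price_per_kwh, maintenance, depreciation_savings):
    """Burdened annual cost: energy plus maintenance, offset by depreciation savings."""
    energy = power_kw * 24 * 365 * energy_price_per_kwh
    return energy + maintenance - depreciation_savings

# Aging fleet: more power for the same work, higher maintenance, depreciation exhausted.
old_fleet = annual_cost(power_kw=300, energy_price_per_kwh=0.10,
                        maintenance=120_000, depreciation_savings=0)

# Refreshed fleet: same work in less power, lower maintenance, active depreciation.
new_fleet = annual_cost(power_kw=150, energy_price_per_kwh=0.10,
                        maintenance=40_000, depreciation_savings=80_000)

print(f"Old fleet annual cost: ${old_fleet:,.0f}")
print(f"New fleet annual cost: ${new_fleet:,.0f}")
```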

Research that emerged in 2017 suggests that, by 2025, data centers will use about 20 percent of all the world’s electricity and be responsible for roughly 5.5 percent of all carbon emissions. What can third-party validation companies such as the Uptime Institute do to reduce this environmental impact?

Harris: We have heard sensational claims like this for years, and while the numbers may not be quite as dramatic as stated, more power and carbon will be in play in 2025 than today. All companies that own and operate digital infrastructure should challenge themselves to seek out guidance on changing their business processes to allow for innovation at the infrastructure level. As mentioned previously, the technology refresh opportunity has been well understood for more than 20 years, but the accountability to capitalize on it has been nonexistent.

Today, most companies gravitate toward actions that make money or save money, and it can be demonstrated that thinking about efficiency, creating new applications that can be hosted on modern platforms and refreshing technologies on a regular basis can do both. Taking these actions is no longer just socially responsible; it affects the bottom line, which drives everything else we do.

What does the future hold for data center infrastructure?

Harris: First, while it is true that a wide range of technologies is introduced each year, these innovations will solve a smaller number of problems over time, and the successful solutions will mature at a much faster rate than previously seen. The ability to compress capacity into smaller amounts of space will give rise to huge efficiencies. Intelligence at the foundational level, with AI and machine learning, will augment the operational characteristics of data centers, enabling changes to power distribution and cooling at a much faster rate and at a much more granular level. Digital infrastructure will also expand outward from the dense cores, with highly capable edge hubs that can take advantage of high-speed, high-capacity 5G and beyond.

The second part of the answer is all about business. All of these latest-generation, high-density computing sites will become even more essential to delivering IT services for the business. While downtime of any single service will become rare, the essential nature of each site will become apparent as overall capacity is totaled: each site will form a critical gear in the machine, and while the machine will continue to operate without any single site, its ability to meet the business needs of its constituents will be diminished. The biggest change in data centers comes from their tight new alignment with the business, rather than from any technology innovation.
