LEESBURG, Va. — CAPRE’s seventh annual Mid-Atlantic Data Center Summit was a major milestone for the conference firm’s International Data Center Series. The event ushered in a new era of multiday summits, encompassing fresh and exciting content, such as the inaugural Women of Mission Critical Summit that took place the day before. Welcoming 400-plus attendees over both days, the summit also featured the recognition of CAPRE’s contribution to the mission critical space by way of the 2019 Platinum Heart Award, presented by Leadership Logic Consulting and the Allied Testing & Commissioning Council.

The Platinum Heart Award showcases outstanding delivery of innovative national conferences to data center and mission critical operations and is given in recognition of individuals and organizations whose contributions, innovations, and passion positively impact and support the information technology and mission critical industries.

CAPRE joins an esteemed list of Platinum Heart Award recipients, including the 7X24 Exchange Intl., the Greater Phoenix Economic Foundation, and The Arizona Technology Council. Although the Platinum Heart Award is the highest honor awarded by Leadership Logic Consulting and the Allied Testing & Commissioning Council, some notable Gold Heart Awardees from the past several years include Mission Critical magazine, Critical Facilities Summit, and Atom Power.

“Each year, Leadership Logic Consulting seeks out and accepts nominations for these awards to recognize outstanding organizations that positively impact the data center/mission critical industry," said Tim Oergel, founder, chief skills and learning officer, and senior corporate trainer at Leadership Logic Consulting. "I am delighted to see CAPRE’s stellar performance in education, and how all of these organizations are making a difference for the future of our industry.”

“This has been a great ride for the past 10 years with this data center series," said Brian Klebash, CEO & founder of CAPRE. "Today we travel to the largest markets from city to city, country to country, but we started this back in New York City in 2011.”

Oracle is expecting to open a new data center region every 23 days on average over the next 15 months in a bold investment strategy to create 20 additional facilities across the globe.

The move comes as hyper-scale cloud operators such as Amazon, Google and Microsoft are continuing to invest billions in data centers this year following a record $120 billion in capex spending last year.

The Redwood City, Calif.-based software and cloud giant unveiled plans at OpenWorld this week to launch 20 new Oracle Cloud sites by the end of 2020, including 17 commercial and three government centers.

Looking at the cloud provider data center market, Microsoft Azure is currently available in 54 regions, followed by Amazon Web Services with 22 regions and Google Cloud Platform at 20 regions. Oracle Cloud plans to have a total of 36 regions available by the end of 2020.


Oracle Cloud will build new data centers in California, Chile, Montreal, Melbourne, Amsterdam, Singapore, Israel, South Africa, Belo Horizonte in Brazil, Osaka in Japan, Hyderabad in India, Chuncheon in South Korea, Newport in Wales, as well as two in Saudi Arabia and two in the United Arab Emirates. The company also intends to open two regions for the U.K. government and one for the government of Israel.

The top five worldwide hyper-scale data center spenders in the second quarter of 2019 were Amazon, Apple, Google, Facebook and Microsoft. Other leading hyper-scale spenders include Oracle, Alibaba, IBM, Tencent and Baidu. Capex spending in building, expanding and equipping huge data centers hit $28 billion in the second quarter of 2019, down 2 percent year over year. The 2 percent drop was mainly due to hyper-scale capex decline in China, where the region was down a whopping 37 percent year over year in data center spending.

Not too long ago, an article on BitDefender caught my eye. Titled “California’s ban on weak default passwords isn’t going to fix IoT security,” it explained how default passwords are a problem with the Internet of Things (IoT), but they’re not the problem. In fact, author Graham Cluley went so far as to say, “It also won’t address other problems such as IoT devices with weak or non-existent encryption, or internet-enabled technology which has no updating infrastructure if a vulnerability is found in the future.”

Cluley mentioned the Mirai botnet attack on Dyn’s DNS service and how default passwords are at least partly to blame for the ease with which the Mirai code took control of an army of IoT devices.

Now it’s well known (but not talked about enough) that DNS has its own problems to solve (https://bit.ly/2Q8lt9I). But that’s a different problem, and therefore a discussion for another day. The Mirai attack wasn’t about DNS’s weaknesses; it was about IoT’s utter lack of security.

My colleague, Thomas LaRock, had a slightly different take on California’s ban. Commenting on the BitDefender article, he said, “It’s a start, but not enough. We need oversight on how these devices are made, specifically how they can be patched as needed.”

So, this is where I throw my internet-connected hat into the ring and tell you exactly how this is going to get fixed.

In a nutshell, it’s on you, the IoT owner.

IoT security is not going to be solved by IoT vendors (or at least, not spontaneously out of the goodness of their hearts). The fact is, those vendors are changing far too rapidly — winking in and out of business like fireflies on a hot summer night. They use bargain-basement components that change from revision to revision (and sometimes in between).

Restoring power to data centers can result in costly maintenance and downtime for service providers. In 2016, the national average cost of downtime was almost $9,000 per minute. Often, this power failure can be attributed to preventable hardware failures. In 2016, the Ponemon Institute conducted a study to quantify the cost of downtime and identify the most common causes of data center outages. According to the study, the most common reasons for unplanned outages are UPS system failure, cybercrime, accidental human error, water or heat failure, weather-related disasters, generator failure, and IT equipment failure. The least common cause of unplanned outages was IT equipment failure, coming in at 4%. However, when the study examined the total cost of each cause of failure, IT equipment failure ranked the most expensive. Hardware failures can stem from many places, including cooling fans, hard disk drives, and busbars.
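At roughly $9,000 per minute, even short outages carry large price tags. A minimal sketch of that arithmetic, using the 2016 average from the study above (the outage durations are hypothetical, chosen only for illustration):

```python
# Rough downtime-cost estimate using the 2016 Ponemon figure of ~$9,000/minute.
COST_PER_MINUTE = 9_000  # national average cost of downtime, 2016 (USD)

def outage_cost(minutes: float) -> float:
    """Estimated cost of an unplanned outage of the given duration."""
    return minutes * COST_PER_MINUTE

# Hypothetical outage durations, for illustration only
for minutes in (5, 30, 95):
    print(f"{minutes:>3}-minute outage: about ${outage_cost(minutes):,.0f}")
```

A 95-minute outage, for example, works out to roughly $855,000 at that rate, which is why even the "cheapest" failure causes in the study still represent serious losses.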

The Challenge

An integrated cloud technology company noticed that the backup servers in its data centers were being used more than normal. After a little digging, the team realized that their primary servers were experiencing power failures, which caused the system to rely on the backup servers. Upon inspection, they discovered that the busbar and crown clip connection for the primary server had corrosion buildup. This buildup was attributed to fretting corrosion, or micro-motions that wear contacts and expose fresh layers of metal to oxidation. This eventually created an open connection and, ultimately, power failure. The provider determined that the micro-motion occurred during both shipping and regular operation. The data center provider needed a solution that would protect future manufactured connectors from fretting corrosion and restore reliable connectivity to damaged connectors in the field.

Sometimes, unplugging and replugging connectors is enough to solve intermittent power failures. However, unless a dielectric lubricant is applied to the connector, the connector will continue to oxidize and corrode.

In my last column, “Market Insanity,” I wrote about some of the challenges the industry faces in the hyperscale space, including a shortage of resources in both design/construction and operations.

With exciting opportunities seeming to blossom every day, one of the latest sprints is speed-to-market in APAC (Asia-Pacific). This creates challenges on multiple levels.

Develop the APAC prototype

If you plan to build in Japan, Singapore, or China, you had better plan to go vertical. Land cost is extremely high (more than $20 million U.S. per hectare), and a well-thought-out multi-story data center prototype is the preferred method of construction. The prototype will also need to be aesthetically pleasing to get through the local authorities while meeting a variety of local codes.

The building layout should also minimize conduit runs and be laid out in a functional manner. The enclosed renderings lay out a typical floor plan, including generators, transformers, electrical equipment, and floor layouts. Typically, on a one-hectare site, you can fit 5,000-sq-meter floorplates, equaling 2,500 sq meters of computer hardware.
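The rule of thumb above (a 5,000-sq-meter floorplate yielding 2,500 sq meters of computer hardware) scales with building height. A quick sketch of that ratio, where the floor counts are hypothetical examples rather than figures from the article:

```python
# Sketch of the floorplate rule of thumb from the text: on a one-hectare site,
# each 5,000 m^2 floorplate yields roughly 2,500 m^2 of computer-hardware space.
FLOORPLATE_M2 = 5_000
IT_FRACTION = 0.5  # half of each floorplate is hardware space, per the article's ratio

def it_space(floors: int) -> float:
    """Total computer-hardware area (m^2) for a multi-story prototype."""
    return floors * FLOORPLATE_M2 * IT_FRACTION

# Hypothetical building heights, for illustration only
for floors in (1, 4, 8):
    print(f"{floors} floor(s): {it_space(floors):,.0f} m^2 of hardware space")
```

This is the economic argument for going vertical: on land costing more than $20 million per hectare, each added floor multiplies the usable hardware area without adding land cost.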

Design philosophy for APAC

While direct-evaporative mechanical systems are preferred within the hyperscale industry, they cannot be achieved in a vertical prototype. In most cases, a large central mechanical plant is preferred due to limited roof space.

In theory, the preferred mechanical design would be N+1 on an individual-floor basis. However, sizing numerous chiller plants per floor is not practical given the quantity of equipment required. Therefore, increasing the chiller sizes (900 to 1,000 tons) reduces the amount of equipment on the roof, making the design mechanically achievable. Electrically, the design can be achieved with either a catcher-block design or a distributed redundant design. Generators should be designed as 3-MW units in an N+1 block configuration.
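The N+1 block approach with 3-MW generator units can be sketched as a simple sizing calculation: divide the critical load into 3-MW blocks (N), then add one redundant block. The facility loads below are hypothetical, used only to illustrate the arithmetic:

```python
import math

# Sketch of N+1 generator block sizing using the 3-MW unit size from the text:
# N blocks cover the critical load, plus one redundant block for the "+1".
GEN_BLOCK_MW = 3.0  # generator unit size per the design philosophy above

def n_plus_one_generators(critical_load_mw: float) -> int:
    """Generator count for an N+1 block configuration at a given critical load."""
    n = math.ceil(critical_load_mw / GEN_BLOCK_MW)  # blocks needed to carry the load
    return n + 1  # one redundant block

# Hypothetical facility loads, for illustration only
for load_mw in (9, 10, 24):
    print(f"{load_mw} MW load: {n_plus_one_generators(load_mw)} x 3-MW generators")
```

The same ceiling-plus-one logic applies to the enlarged 900-to-1,000-ton chillers: fewer, larger units keep the redundant "+1" overhead small relative to the total plant.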