Enterprise Networks | News


As the de facto standard Kubernetes cluster orchestrator moves deeper into enterprise IT infrastructure, version updates are coming at a faster pace. The third release of 2019 comes with a grand total of 31 “enhancements” in various stages of production readiness, ranging from “alpha” to “stable.”

Many of the latest upgrades target API management and Windows-based containers, allowing Windows workloads to be attached to existing clusters in much the same way as Linux nodes.

Kubernetes 1.16, released on Wednesday (Sept. 18), emphasizes the general availability of custom resource definitions (CRDs), used to extend orchestrator capabilities by specifying storage and other resources. “The hard-won lessons of API evolution in Kubernetes have been integrated,” developers noted. “As we transition to [general availability], the focus is on data consistency for API clients.”

CRDs and other API development tools “are enough to build stable APIs that evolve over time, the same way that native Kubernetes resources have changed without breaking backwards-compatibility,” they added.
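To make the concept concrete, a CRD is itself just another Kubernetes resource: a manifest that teaches the API server a new object type. The sketch below builds a minimal manifest for the `apiextensions.k8s.io/v1` API that reached general availability in 1.16; the group, kind, and field names (`example.com`, `Widget`, `replicas`) are hypothetical placeholders, not anything from the release itself.

```python
import json

# A minimal CustomResourceDefinition manifest using the apiextensions.k8s.io/v1
# API that went GA in Kubernetes 1.16. Group/kind names are hypothetical.
crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "widgets.example.com"},  # must be <plural>.<group>
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"plural": "widgets", "singular": "widget", "kind": "Widget"},
        "versions": [
            {
                "name": "v1",
                "served": True,
                "storage": True,
                # v1 CRDs require a structural OpenAPI v3 schema -- this is
                # what underpins the data-consistency guarantees for API
                # clients mentioned above.
                "schema": {
                    "openAPIV3Schema": {
                        "type": "object",
                        "properties": {
                            "spec": {
                                "type": "object",
                                "properties": {"replicas": {"type": "integer"}},
                            }
                        },
                    }
                },
            }
        ],
    },
}

print(json.dumps(crd, indent=2))  # kubectl also accepts JSON manifests
```

Once a manifest like this is applied (for example, by piping the output into `kubectl apply -f -`), the API server serves `widgets.example.com` objects alongside built-in resources.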

Upstream Kubernetes contributors such as Google (NASDAQ: GOOGL) and Red Hat helped shepherd CRDs to production. CRDs “are a main extension point for building cloud native applications on Kubernetes,” the IBM (NYSE: IBM) unit said in a blog post. Red Hat has supported CRDs in recent releases of OpenShift and expects to integrate the latest Kubernetes enhancements into its container application platform.

The latest release also includes better metrics and the ability to resize volumes through the Kubernetes container storage interface (CSI) introduced last year. CSI allows users to automatically provision storage and make it available to application containers whenever they are scheduled. Storage can then be deleted when no longer needed.

The Kubernetes release team said volume resizing support for CSI moves up to beta, allowing volumes backed by any CSI plugin to be resized.
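The provisioning-and-resizing workflow described above amounts to two small manifests: a StorageClass that points at a CSI driver and opts in to expansion, and a PersistentVolumeClaim that requests storage from it. The sketch below assumes a hypothetical driver name (`csi.example.com`) and illustrative sizes; it is not taken from the release notes.

```python
import json

# StorageClass backed by a CSI driver. The provisioner name is a hypothetical
# placeholder for whatever CSI driver is installed in the cluster.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "fast"},
    "provisioner": "csi.example.com",
    # Required for volume resizing: without it, edits to a claim's requested
    # size are rejected.
    "allowVolumeExpansion": True,
}

# A claim against that class; the CSI driver provisions the volume when a pod
# using this claim is scheduled.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "data"},
    "spec": {
        "storageClassName": "fast",
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# Resizing is just raising the request on the live object; the CSI plugin
# expands the underlying volume to match.
pvc["spec"]["resources"]["requests"]["storage"] = "20Gi"

for manifest in (storage_class, pvc):
    print(json.dumps(manifest, indent=2))  # pipe into `kubectl apply -f -`
```

Deleting the claim releases the provisioned storage (subject to the class's reclaim policy), covering the "deleted when no longer needed" half of the workflow.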

Enterprise Networks | News


Dell Technologies this week rolled out redesigned servers based on AMD’s latest Epyc processor, geared toward data-driven workloads running on increasingly popular multi-cloud platforms.

Dell, which has seen its lead shrink in the contracting global server market, is banking on AMD’s 7-nm Rome server processor introduced in August to provide the bandwidth and computational horsepower needed to scale enterprise cloud deployments. That, the server maker (NYSE: DELL) said, would allow “dynamic workload scaling” for HPC, data analytics and other emerging workloads.

The rollout announced Tuesday (Sept. 17) includes a pair of single-socket and three dual-socket PowerEdge servers based on AMD’s (NASDAQ: AMD) second-generation Epyc processor. Dell sought to avoid “a cookie cutter approach wherein we [would] presume that a particular server platform can meet the needs of every single workload, and that's where the differentiation comes in,” said Ravi Pendekanti, Dell’s senior vice president for server infrastructure products.

“It’s really about balancing the system design to leverage” the second-generation Rome processor, added David Schmidt, Dell’s product manager.

The company also claimed a series of performance records based on the TPC benchmark for its one- and two-socket PowerEdge servers, including a 280-percent increase in virtual machine density for running databases.

With an eye on HPC going mainstream, Pendekanti said the high-end, two-socket model achieved 3,462 Gflops, representing a more than 200-percent performance upgrade based on the LINPACK benchmark. Those performance benchmarks grew out of earlier development at Dell EMC’s recently formed HPC and AI Innovation Lab.

Pendekanti also noted the server security features provided by AMD’s second-generation Rome processor, including encrypted virtualization and memory encryption. Hence, server security would extend “all the way from the firmware up through the higher levels of ecosystem,” he said.

What is the state of the mission critical industry? In this ongoing feature, we ask industry leaders their thoughts on where the industry is headed and what will happen along the way. This issue we talk to Herb Villa, systems consultant for data center solutions at Rittal; Carrie Goetz, D.MCO, a global IT executive, keynote speaker, and consultant; and Jake Ring, CEO of GIGA Data Centers.

MC: The mission critical industry has been growing rapidly over the last few years and is expected to continue strong for the foreseeable future. To what do you attribute that growth, and how long can it continue?

Villa: The growth of the industry I can sum up with two simple acronyms: IoT and IIoT, the Internet of Things and the Industrial Internet of Things, where end users and companies are developing new applications that optimize what the internet is capable of doing, whether it’s social media, whether it’s controls, or whether it’s monitoring. It is the explosive growth of the IoT in the hospitality, health care, and financial industries. The other thing we see is the shift away from the traditional end-user data systems that we built years ago. The demand is still there for the compute power, but these companies, whether they be Fortune 500 or small mom-and-pop shops, realize they are not in the business of IoT and that there are providers that can manage that IoT better for them.

Goetz: Data is driving much of the growth. Where IT used to be considered a necessary evil, today we know that companies use it to gain a great competitive advantage. Of course, the more data we have, the more we want, and the more we can do with it. Near-field communications and things like autonomous cars are going to increase the need for storage (temporary or more permanent).

Enterprise Networks | News

Network bandwidth and overall performance remain concerns for digital enterprises as they roll out data analytics and edge computing efforts.

Nearly two-thirds of companies surveyed by management consultant Accenture (NYSE: ACN) said their enterprise networks are not up to the task of handling big data and Internet of Things deployments. A key reason is a “misalignment between IT and business needs” that is stymying those rollouts.

Only 43 percent of those companies polled said their networks are ready to support cloud and other digital technologies.

Network bottlenecks continue to grow as data volumes soar and companies seek to deploy analytics and other big data technologies to make sense of it all. Hence, the Accenture survey found that bandwidth demands were not being met and that current network performance falls short of what users require. (Both were cited by 45 percent of respondents as ongoing issues.)

Accenture reported that most companies surveyed are deploying software-defined networks to address bandwidth and performance challenges. But even as companies embrace a “unified enterprise network” approach, the survey found, “it was clear that the majority continue to see their networks in pieces and parts.”

Most said they are already using IoT, edge computing and big data analytics technologies along with cloud-based customer and employee tools. While most are generally satisfied with network reliability and security, bandwidth and “overall capability” continue to fall short.

The disconnect over network performance was reflected in the diverging views of users and IT teams. CIOs and CTOs were generally satisfied with network performance; executives and hands-on users were not. Nevertheless, IT specialists were relatively sanguine about their ability to boost network bandwidth over the next 18 to 24 months.

Among the reasons are emerging technologies such as NVM Express over storage network fabrics, along with a vibrant developer community coalescing around service meshes.