Enterprises today want to improve their competitive positions through ever faster and more automated technology, and breakthroughs at the silicon level are key to making this possible. What enterprises need are systems that can handle today’s complex workloads and data sets (so-called big data) and apply advanced methodologies such as real-time analytics, machine learning, and artificial intelligence. Such systems rely on huge memory pools and extremely high-speed storage that can include in-situ processing. Technologies such as 3D NAND, QLC, and 3D XPoint can provide the necessary high density, low cost, and extremely fast access times. These technologies will lead to SSDs with features and functionality designed specifically to meet workload demands, whether the environment is read-intensive, write-intensive, or mixed use. They can also support the protocols and fabrics that let storage be shared across multiple applications. Furthermore, they provide a logical basis for a software layer designed specifically for flash memory, one that will bring new levels of programming and management flexibility as well as design simplicity and throughput far beyond what the industry has achieved so far.
Today, 90% of IP traffic is inside the data center, and it is growing by 20% annually. Cores and threads per server have increased from one to hundreds (and thousands per rack). Distributed databases and machine learning will require even more interconnectivity. How do we design the fabrics and engineer the traffic to make all this work efficiently, at reasonable cost, and with tremendous scalability? The solution has several parts:
. Software-defined everything to enable service chains, automation, and ultimately self-driving fabrics
. Micro-segmentation of fabric performance, security, and visibility services that scale out indefinitely with each server
. Container-aware fabrics that can use virtual networks to deliver unique services to each container microservice
. Retaining Ethernet as the core technology while converging new capabilities onto the network
An application running on one core can then interconnect via the fabric with hundreds or thousands of other cores inside a rack. NVMe will play a key role in providing standardized access to the higher throughput of PCIe. The result will be a fully adaptable fabric-based system well suited to the needs of clouds and hyperscale websites and to the requirements of real-time analytics, personalized nodes in massive social networks, and the Internet of Things.
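A rough back-of-the-envelope sketch illustrates the PCIe headroom that NVMe exposes. The specific figures (PCIe 3.0's 8 GT/s per lane with 128b/130b encoding, SATA III's 6 Gb/s line rate with 8b/10b encoding) are assumptions added for illustration, not numbers from the abstract above:

```python
# Rough sketch of why NVMe over PCIe outpaces a legacy interface.
# Assumed figures: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b
# encoding; SATA III has a 6 Gb/s line rate with 8b/10b encoding.

def pcie3_usable_gbps(lanes: int) -> float:
    """Usable bandwidth of a PCIe 3.0 link in Gb/s."""
    per_lane = 8.0 * (128 / 130)  # ~7.88 Gb/s after encoding overhead
    return lanes * per_lane

sata3_usable_gbps = 6.0 * (8 / 10)  # ~4.8 Gb/s after 8b/10b encoding

x4 = pcie3_usable_gbps(4)  # a typical NVMe SSD link width
print(f"PCIe 3.0 x4: {x4:.1f} Gb/s usable")          # ~31.5 Gb/s
print(f"SATA III:    {sata3_usable_gbps:.1f} Gb/s")  # ~4.8 Gb/s
print(f"Speed-up:    {x4 / sata3_usable_gbps:.1f}x")
```

Even a common four-lane PCIe 3.0 link offers several times the usable bandwidth of SATA, which is the headroom NVMe's standardized command set makes accessible.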
Hyperscale data centers such as clouds, social networks, and large e-commerce sites are using ever-larger amounts of flash memory to achieve the required throughput and latency at minimal cost. How can such large amounts of flash storage be networked efficiently? High-speed interfaces that eliminate server bottlenecks are obviously a basic element of the solution. However, they must be sized correctly to match application needs without overloading the network core or breaking the budget. With NVMe SSDs able to deliver over 20 Gb/s of throughput, the server bus architecture must match PCIe bus and memory bus capabilities to CPU, NIC, and flash requirements. The network topology must also be adjusted to match traffic patterns and application performance requirements, including features such as 100 GbE switch-to-switch connections and wider east-west data paths as needed. Software must be upgraded as well, for example by deploying new storage stacks that take full advantage of SSDs and persistent memory via emerging technologies such as NVMe over Fabrics. Scalability and flexibility are key factors too, since data will keep growing in size and new technology breakthroughs will continue to occur (for example, 200 GbE and 400 GbE, and persistent memory-based storage at near-memory speeds).
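The sizing question above can be made concrete with the paragraph's own figures: SSDs sustaining roughly 20 Gb/s each and 100 GbE uplinks. The helper functions below are a hypothetical illustration, not a real capacity-planning tool:

```python
# Back-of-the-envelope fabric sizing using the figures above:
# each NVMe SSD can sustain ~20 Gb/s, and switch-to-switch links
# run at 100 GbE. (Hypothetical helpers for illustration only.)

def max_ssds_per_uplink(ssd_gbps: float, uplink_gbps: float) -> int:
    """SSDs one uplink can serve at full rate with no oversubscription."""
    return int(uplink_gbps // ssd_gbps)

def oversubscription(ssds: int, ssd_gbps: float, uplink_gbps: float) -> float:
    """Ratio of offered SSD bandwidth to uplink capacity."""
    return (ssds * ssd_gbps) / uplink_gbps

print(max_ssds_per_uplink(20, 100))   # 5: five SSDs saturate one 100 GbE link
print(oversubscription(24, 20, 100))  # 4.8: a 24-SSD shelf on one uplink
```

Just five such SSDs can saturate a 100 GbE link, which is why topology, uplink count, and oversubscription ratios must be chosen against measured traffic patterns rather than raw device counts.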
The convergence of advanced NVM technology, software, and architecture has created new performance levels for storage, opening opportunities that simply weren’t available before. It is an exciting time in storage technology as these developments transform our ability to gather, process, and store information. Learn how breakthrough technologies, available today, greatly reduce latency and increase throughput through advanced architectures, instructions, software, and solid-state media. These innovations are the foundation for products that deliver business and customer value through improved performance, capacity, manageability, and reliability.
Too much good stuff! The amount of critical real-time data keeps growing at a tremendous rate, with zettabytes (billions of terabytes) coming in the near future. New types of digital storage are essential to handle the flood. They must offer huge capacity and big-time scalability, and they must suit enterprise edge storage, endpoints, enterprise data and control centers, and billions of users everywhere. Today’s storage technologies cannot do the job, nor can media manufacturers alone meet these enormous performance and capacity challenges. Storage manufacturers must develop innovative technologies and media-independent solutions. They must create more powerful controllers, more flexible form factors, and higher-density structures. In doing so, they can ensure that enterprises take full advantage of growing data resources.