The Data Processing Unit

  • Published: 30 May 2019
  • Moore's Law has been weakening since the early 2000s. As it continues to break down, data centers will struggle more and more to keep up with the increasing data demands of the 21st century. We believe a new category of microprocessor is needed; we're calling it the Data Processing Unit, or DPU.
    To learn more about Fungible, visit: www.fungible.com
    Follow us on Twitter: / fungible_inc
    Follow us on LinkedIn: / fungible-inc.
    #datacenter #technology #cloud #network #data #datacentric #datacenterarchitecture #datacentricsw #softwarearchitecture #fungible_inc
    Today's data centers are responsible for powering billions of connected devices for individuals, enterprises, and institutions worldwide. Massive amounts of data are generated and consumed by these devices at unprecedented rates in today's data-centric era. So how will next-generation data centers cope with this skyrocketing demand? To look forward, let us first look back at how data centers evolved.

    The original data center server architecture was not very different from a personal computer's. At the heart of the server was the central processing unit, or CPU. Connected to the CPU were memory, hard drives, and a network interface controller, or NIC, which enables a connection to the network. Solid-state drives, or SSDs, were introduced when higher performance and more predictable access times were needed. In recent years, other elements such as graphics processing units, or GPUs, were added to the mix to run specialized computation tasks, such as complex math functions, far more quickly than a CPU ever could. This architecture is primarily compute-centric.

    In this architecture, the CPU has two roles to play: its primary role of running applications, and at the same time the role of a data traffic controller, moving data between the network, the GPU, storage, and other resources. This wasn't much of a problem in the past, when the network and storage were slow and CPUs could spend milliseconds on a single task. Further, CPUs were doubling in performance every generation and were thus never in the critical path. These days, SSDs are a hundred times faster than regular hard drives, and networks are thousands of times faster, but new generations of CPUs are no longer keeping pace. The traffic controller role is now highly intensive. Not only was the CPU not designed for this role, the role distracts the CPU from the work it does well.
To enable more efficient data centers, ones that can truly address the needs of the future, a new architecture beyond compute-centric is needed: one more aptly called data-centric. The new architecture should liberate all server resources from being stranded behind the CPU, giving them direct access to the network and allowing them to focus on the tasks they do best. To enable server resources to move data efficiently to and from each other, we are introducing a new type of processor known as the data processing unit, or DPU. First, the DPU should take on the role of a super-charged data traffic controller, offloading the CPU from this I/O-intensive task but performing it orders of magnitude more efficiently than the CPU. Specifically, the DPU should be adept at sending and receiving packets from the network, encrypting and compressing the immense amounts of data moving around these servers, and running firewalls to protect servers against abuse. Second, the DPU should enable heterogeneous compute and storage resources distributed across servers to be pooled to maximize utilization and, in doing so, reduce the total cost of ownership (TCO) of those resources. We believe data engines such as the DPU will enable data centers to reach the efficiencies and speeds necessary to empower the radical innovations that will soon change the world.
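The offload argument above can be illustrated with a toy cost model. This is only a conceptual sketch, not Fungible's implementation or measured data: the function names, cycle counts, and packet counts are all hypothetical, chosen to show why removing the per-packet traffic-controller work from the CPU leaves more cycles for applications.

```python
# Toy cost model (hypothetical numbers) contrasting the compute-centric
# path, where the CPU moves every packet itself, with the data-centric
# path, where a DPU handles packet movement and the CPU pays only a
# small handoff cost per packet.

def cpu_cycles_compute_centric(app_work, packets, per_packet_cost):
    """CPU runs the application AND acts as the data traffic controller."""
    return app_work + packets * per_packet_cost

def cpu_cycles_data_centric(app_work, packets, handoff_cost):
    """The DPU moves the packets; the CPU only hands them off."""
    return app_work + packets * handoff_cost

if __name__ == "__main__":
    app_work = 1_000_000   # cycles the application itself needs (illustrative)
    packets = 100_000      # packets moved while that work runs (illustrative)

    legacy = cpu_cycles_compute_centric(app_work, packets, per_packet_cost=50)
    offloaded = cpu_cycles_data_centric(app_work, packets, handoff_cost=2)

    # In this model the CPU spends most of its cycles shuffling data in the
    # compute-centric case, and almost all of them on the application when
    # the traffic-controller role is offloaded to a DPU.
    print(legacy)     # 6000000 total CPU cycles, 5/6 spent on I/O
    print(offloaded)  # 1200000 total CPU cycles
```

The point of the model is only the ratio: as networks and SSDs get faster, `packets` grows while CPU performance per generation stalls, so the `packets * per_packet_cost` term comes to dominate unless it is moved off the CPU.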
