Blogs
Towards Tomorrow’s AI Networking: RDMA and IP over CXL Fabric and More (2024 June 18)
Hello everyone. Today we will share our insights on how our company views AI networking and the advancements we are making with CXL technology. We have titled our presentation “RDMA and TCP/IP over CXL Fabric and More.” We will discuss the progress of the RDMA and TCP/IP protocols over CXL fabrics and their applications in high-performance GPU clusters and high-performance storage clusters in the field of artificial intelligence.
NUPA: RDMA and TCP/IP over CXL and PCIe Fabric (2024 March 26)
In the past three years, with the advancement of Large Language Models (LLMs), the potential for leveraging extensive computational power towards achieving Artificial General Intelligence (AGI) has become increasingly apparent. However, the substantial increase in model parameters has posed significant challenges to network infrastructures, especially those supporting GPU and AI cluster facilities. Nvidia, as an industry leader, has leveraged its existing GPUs and InfiniBand (IB) networks, alongside the latest NVLink and NVSwitch technologies, to develop a comprehensive solution that covers…
Forward Thinking on RDMA over CXL Protocol (2024 February 06)
With the rapid development of AI technology, the demand for AI networks has been increasing. In this context, Remote Direct Memory Access (RDMA) technology, which offers low latency and high throughput, has become increasingly important. However, the requirement for high-cost network interface cards (NICs) has limited the widespread adoption of traditional RDMA technology. To address this issue, this study combines the characteristics of the emerging interconnect technology, Compute Express Link (CXL), with RDMA technology, resulting in the innovative RDMA…
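The core idea in the excerpt above is that when CXL exposes remote memory as ordinary load/store memory, an RDMA-style one-sided transfer no longer needs a NIC in the data path. A minimal sketch of that idea, simulated here with a plain shared-memory region standing in for a CXL-mapped window (the region name and helper functions are hypothetical illustrations, not the article's actual API):

```python
# Illustrative sketch only: a shared-memory segment plays the role of a
# CXL-attached memory window. A one-sided "RDMA write" then reduces to a
# direct memory copy; the peer is not involved in the transfer.
from multiprocessing import shared_memory

CXL_REGION_NAME = "cxl_window_demo"  # hypothetical name for the mapped window
REGION_SIZE = 4096

def rdma_write(region: shared_memory.SharedMemory, offset: int, payload: bytes) -> None:
    """One-sided write: copy payload into the shared window at offset."""
    region.buf[offset:offset + len(payload)] = payload

def rdma_read(region: shared_memory.SharedMemory, offset: int, length: int) -> bytes:
    """One-sided read: load bytes directly from the shared window."""
    return bytes(region.buf[offset:offset + length])

# The "initiator" creates the window and writes; a peer process attaching
# to the same name would observe the data directly via plain loads.
region = shared_memory.SharedMemory(name=CXL_REGION_NAME, create=True, size=REGION_SIZE)
try:
    rdma_write(region, 0, b"hello-cxl")
    data = rdma_read(region, 0, 9)
finally:
    region.close()
    region.unlink()
```

A real CXL fabric would add ordering, cache-coherence, and completion-notification concerns that this process-local simulation does not capture.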
What is Memory? A Deep Thinking from CXL Expansion (2024 January 16)
This article explores the need for memory expansion in the context of cloud computing and AI. It discusses two key trends driving the demand for memory expansion: resource over-commitment in cloud computing and the increasing size of Large Language Models (LLMs). The article highlights the advantages of memory pooling and serial memory architectures like PCIe and High Bandwidth Memory (HBM) in addressing these needs. It introduces CXL technology as a solution that offers low latency, high bandwidth interconnect features, and…
Resources
- GitHub: https://github.com/Clussys
- Website: https://clussys.com
- Contact: info@clussys.com