Technology Integration

The Technology Integration Group's Networking projects focus on data movement at all scales and levels of abstraction: among processors within a supercomputer, among computers across continents, and across diverse hardware transport mechanisms using a variety of protocol stacks.

Common Communication Interface (CCI)

The CCI project is an open-source communication interface that aims to provide a simple and portable API, high performance, scalability for the largest deployments, and robustness in the presence of faults. It is developed and maintained by a partnership of research, academic, and industry members.

Targeted at high-performance computing (HPC) environments as well as large data centers, CCI can provide a common network abstraction layer (NAL) for persistent services as well as general interprocess communication. In HPC, MPI is the de facto standard for communication within a job. Persistent services such as distributed file systems, code coupling (e.g. a simulation sending output to an analysis application, which in turn feeds a visualization process), health monitoring, debugging, and performance monitoring, however, exist outside of scheduler jobs or span multiple jobs. These persistent services tend either to use BSD sockets for portability, avoiding a rewrite for each new interconnect, or to implement their own NAL, which costs developer time and effort. CCI can simplify support for these persistent services by providing a common NAL that minimizes their maintenance and support burden while delivering improved performance (i.e. reduced latency and increased bandwidth) compared to sockets.
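To illustrate the idea of a common NAL, the sketch below shows a minimal, hypothetical interface in C: a persistent service (here, a toy health monitor) is written once against a small transport-neutral interface, and each interconnect would supply its own implementation. The names (`nal_ops`, `stub_nal`, `report_health`) and the stub transport are invented for this example; they are not CCI's actual API, which defines its own endpoints, connections, and events.

    /* Hypothetical sketch of a common network abstraction layer (NAL).
     * Names and types here are invented for illustration; they are not the CCI API. */
    #include <stddef.h>
    #include <stdio.h>

    struct nal_conn { const char *uri; };                   /* per-connection state */

    struct nal_ops {
        struct nal_conn *(*connect)(const char *uri);               /* open a connection  */
        int  (*send)(struct nal_conn *c, const void *b, size_t n);  /* small-message send */
        void (*close)(struct nal_conn *c);
    };

    /* Stub transport standing in for a real back-end (sockets, Verbs, GNI, ...). */
    static struct nal_conn stub_conn;
    static struct nal_conn *stub_connect(const char *uri) { stub_conn.uri = uri; return &stub_conn; }
    static int  stub_send(struct nal_conn *c, const void *b, size_t n)
    { (void)b; printf("send %zu bytes to %s\n", n, c->uri); return 0; }
    static void stub_close(struct nal_conn *c) { (void)c; }

    static const struct nal_ops stub_nal = { stub_connect, stub_send, stub_close };

    /* A persistent service (e.g. a health monitor) is written once against the
     * interface and does not change when the underlying interconnect changes. */
    static int report_health(const struct nal_ops *nal, const char *collector_uri)
    {
        struct nal_conn *c = nal->connect(collector_uri);
        if (!c)
            return -1;
        const char msg[] = "node42: OK";
        int rc = nal->send(c, msg, sizeof msg);
        nal->close(c);
        return rc;
    }

    int main(void)
    {
        /* A real deployment would select a sockets, Verbs, or GNI back-end here. */
        return report_health(&stub_nal, "tcp://collector.example.org:5000");
    }

Writing the service against the interface once is what removes the per-interconnect porting effort described above; only the back-end behind the function pointers changes.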

We currently support sockets (UDP and TCP), OpenFabrics Verbs (InfiniBand and RoCE), Cray GNI (Gemini and Aries), Cray Portals 3.3 (SeaStar), D-Star (a native CCI implementation on SeaStar), and raw Ethernet (IP-bypass). Visit the CCI Forum for release information.

Contributors: Scott Atchley

Status: On-going


An Adaptive End-to-End Approach for Terabit Data Movement Optimization

The goal of this project is to improve the throughput of wide-area data transfers between DOE facilities. The work involves several areas.

Contributors: Scott Atchley, Youngjae Kim

Status: Completed


Staged Data Movement using Multiple Network Paths for Improved Throughput

This effort develops tools to improve bulk data movement of large scientific datasets between DOE facilities and collaborating institutions. The initial tool allows a user to specify a set of files, the source and destination institutions, and intermediate facilities that can assist in the transfer. The tool then transfers some data directly from source to destination while simultaneously moving data to the intermediate locations and from there on to the destination. The user may see improved throughput by taking advantage of dedicated paths between the various locations. This is a form of overlay routing built on common tools such as scp, bbcp, and GridFTP.
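The sketch below illustrates the staged approach under simple assumptions: part of a file set goes directly from source to destination while, in parallel, the rest is staged through an intermediate host. The hostnames, paths, and the even split are placeholders, and a production tool might use bbcp or GridFTP and schedule chunks adaptively rather than statically.

    /* Sketch of staged data movement over multiple paths.  All hosts, paths,
     * and the 50/50 split are placeholders for illustration only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Run a shell command in a child process and return its pid. */
    static pid_t run(const char *cmd)
    {
        pid_t pid = fork();
        if (pid == 0) {
            execlp("sh", "sh", "-c", cmd, (char *)NULL);
            _exit(127);                       /* exec failed */
        }
        return pid;
    }

    int main(void)
    {
        const char *files[] = { "run01.h5", "run02.h5", "run03.h5", "run04.h5" };
        const size_t nfiles = sizeof files / sizeof files[0];
        const char *dest  = "user@dest.example.gov:/data/incoming/";
        const char *inter = "user@hub.example.edu:/staging/";

        pid_t pids[16];
        size_t npids = 0;

        for (size_t i = 0; i < nfiles; i++) {
            char cmd[512];
            if (i % 2 == 0) {
                /* Even-numbered files: direct path, source -> destination. */
                snprintf(cmd, sizeof cmd, "scp %s %s", files[i], dest);
            } else {
                /* Odd-numbered files: staged path, source -> intermediate -> destination. */
                snprintf(cmd, sizeof cmd,
                         "scp %s %s && ssh user@hub.example.edu 'scp /staging/%s %s'",
                         files[i], inter, files[i], dest);
            }
            pids[npids++] = run(cmd);         /* both paths proceed concurrently */
        }

        /* Wait for every transfer before reporting completion. */
        int failures = 0;
        for (size_t i = 0; i < npids; i++) {
            int status = 0;
            waitpid(pids[i], &status, 0);
            if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
                failures++;
        }
        fprintf(stderr, "%d transfer(s) failed\n", failures);
        return failures ? 1 : 0;
    }

Running the direct and staged transfers concurrently is what lets the tool exploit dedicated paths between the sites instead of being limited by the single source-to-destination path.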

Contributors: Scott Atchley and Sudharshan Vazhkudai

Status: Completed
