High I/O flow rates, up to 10 GB/s, are required in large fusion tokamak experiments such as ITER, where hundreds of nodes simultaneously store large amounts of data acquired during plasma discharges. Typical network topologies — linear arrays (systolic), rings, meshes (2-D arrays), tori (3-D arrays), trees, butterflies, and hypercubes — combined with high-speed data transports such as InfiniBand or 10G Ethernet, are the main areas on which the effort to overcome the so-called parallel I/O bottleneck is focused. The high I/O flow rates were modelled in an emulated testbed based on parallel file systems, such as Lustre and GPFS, commonly used in High Performance Computing. The tests ran on the High Performance Computing for Fusion (HPC-FF, 8640 cores) and ENEA CRESCO (3392 cores) supercomputers. Message Passing Interface (MPI) based applications were developed to emulate parallel I/O on Lustre and GPFS using data archival and access solutions such as MDSplus and the Universal Access Layer (UAL). These methods of data storage organization are widely used in nuclear fusion experiments and are being developed within the EFDA Integrated Tokamak Modelling Task Force; the authors evaluated their behaviour in a realistic emulation setup. © 2012 Elsevier B.V.
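The paper's MPI-based emulators are not reproduced in the abstract. As a minimal illustrative analogue (not the authors' code), the sketch below emulates concurrent writers striping fixed-size blocks into one shared file at rank-dependent offsets — the same access pattern an MPI-IO benchmark would exercise with `MPI_File_write_at` on Lustre or GPFS — using Python's `multiprocessing` in place of MPI ranks. The names `NPROC`, `CHUNK`, `writer`, and `run_benchmark` are assumptions for illustration only.

```python
import multiprocessing as mp
import os
import tempfile
import time

CHUNK = 1 << 20  # 1 MiB per writer, a stand-in for one node's discharge data
NPROC = 4        # emulated parallel writers, a stand-in for MPI ranks


def writer(path, rank, chunk):
    # Each emulated "rank" writes its block at a fixed offset in the shared
    # file, mimicking MPI_File_write_at on a parallel file system.
    fd = os.open(path, os.O_WRONLY)
    try:
        os.pwrite(fd, bytes([rank % 256]) * chunk, rank * chunk)
    finally:
        os.close(fd)


def run_benchmark():
    fd, path = tempfile.mkstemp()
    os.close(fd)
    # Preallocate the shared file so every writer has a valid offset.
    with open(path, "wb") as f:
        f.truncate(NPROC * CHUNK)
    t0 = time.perf_counter()
    procs = [mp.Process(target=writer, args=(path, r, CHUNK))
             for r in range(NPROC)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    elapsed = time.perf_counter() - t0
    total = NPROC * CHUNK
    # Return the file path, bytes written, and aggregate throughput (B/s).
    return path, total, total / elapsed


if __name__ == "__main__":
    path, total, rate = run_benchmark()
    print(f"wrote {total} bytes at {rate / 1e6:.1f} MB/s aggregate")
    os.remove(path)
```

On a real parallel file system the interesting variables are the stripe count and stripe size, the number of concurrent writers, and whether writes are contiguous or interleaved; the aggregate throughput reported here would be swept over those parameters.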
All Science Journal Classification (ASJC) codes
- Civil and Structural Engineering
- Nuclear Energy and Engineering
- Materials Science (all)
- Mechanical Engineering
Iannone, F., Podda, S., Bracco, G., Manduchi, G., Maslennikov, A., Migliori, S., & Wolkersdorfer, K. (2012). Parallel file system performances in fusion data storage. Fusion Engineering and Design, 87(12), 2063–2067. https://doi.org/10.1016/j.fusengdes.2012.02.075