Fluid Numerics Journal

We have conversations in the office, and within ourselves, every single day that we would like to express to the world. To engage with the scientific community, the unwritten workflow is currently to complete your study, write a paper, find a journal, and submit the manuscript for peer review, after which it may ultimately be released to the community at large. Through our experience with this workflow, we have uncovered a need to maintain public exposure and to encourage a public forum before, if ever, submitting to a scientific journal.

The opportunity to make progress in science will be greatly accelerated if we are able to collaborate and iterate continuously. We don't want to send you periodicals; we want to engage your imagination and possibly inspire action or aspiration. Please subscribe to Fluid Numerics: The Journal to receive updates on our team and our efforts within our domains.

For this work, we focus on the interFoam application, included with Open Source Field Operation And Manipulation (OpenFOAM), and use a higher-resolution version of the Dam Break simulation. The resolution is increased, relative to the Dam Break test case included with OpenFOAM, by modifying the blockMeshDict to directly add cells in each region of the model domain. The resulting mesh has about 2.8 million grid cells.
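To give a sense of where this change lives (with hypothetical cell counts, not the exact values from this study), the resolution in blockMeshDict is set by the three integers in each hex entry of the blocks list, which specify the number of cells in each direction of that block:

// Excerpt of a blockMeshDict "blocks" entry with hypothetical cell counts.
// Raising the three integers adds cells in the x, y, and z directions of
// that block, refining the mesh in that region of the domain.
blocks
(
    hex (0 1 5 4 12 13 17 16) (368 320 16) simpleGrading (1 1 1)
    hex (1 2 6 5 13 14 18 17) (240 320 16) simpleGrading (1 1 1)
);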

We have put together a benchmarking study for the Weather Research and Forecasting Model (WRF) v4 on Google Cloud, complete with a click-to-deploy reproducible workflow so you can perform this study yourself or run your own model.

In this article, we lay out some potential models for calculating the costs that go into a Service Unit, based on resource cost calculation and estimation. Maybe the title should read “$ and ¢ of RCC billing”.
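As a purely hypothetical illustration of the kind of arithmetic involved: if a compute node costs $1.20 per hour to own and operate, and usage on that node is billed as 30 Service Units per node-hour, then cost recovery requires charging at least $1.20 / 30 = $0.04 per Service Unit.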

You may be aware that Fluid Numerics is actively engaged in studying the dynamics of the Gulf Stream. In this work, we are using the MIT general circulation model (MITgcm) to conduct a series of downscaling simulations focused on the eastern seaboard of the United States. We started this work by running a few benchmarks on Google Cloud Platform and on systems available at our colleague's department at Florida State University.

Today, I wanted to share some of the progress we've made and talk about how we are using the Cloud CFD solution to run our MITgcm simulations, convert MITgcm output to VTK, post-process and visualize simulation output with ParaView, and monitor simulation diagnostics with BigQuery and Data Studio.

If you've read some of my other posts, you're aware I'm in the midst of refactoring and upgrading SELF-Fluids. On the upgrade list, I'm planning to swap out the CUDA-Fortran implementation for HIP-Fortran, which will allow SELF-Fluids to run on both AMD and Nvidia GPU platforms. This journal entry details a portion of the work I've been doing to understand how some of the core routines in SELF-Fluids will perform across GPU platforms with HIP.
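For readers unfamiliar with the pattern, the HIP-Fortran approach typically keeps the kernels in HIP C++, exposes each kernel launcher with extern "C", and binds to it from Fortran through ISO_C_BINDING. Below is a minimal sketch of the Fortran side only; the routine name and arguments are hypothetical and are not the actual SELF-Fluids interfaces.

! Hypothetical Fortran binding to a HIP kernel launcher compiled with hipcc.
! The C++ side would define: extern "C" void hip_scalar_flux(double*, double*, int);
module hip_bindings
  use iso_c_binding
  implicit none

  interface
    ! Launches the (hypothetical) scalar flux kernel on the device.
    subroutine hip_scalar_flux(f_dev, flux_dev, n) bind(c, name="hip_scalar_flux")
      use iso_c_binding
      type(c_ptr), value    :: f_dev     ! device pointer to the input field
      type(c_ptr), value    :: flux_dev  ! device pointer to the computed flux
      integer(c_int), value :: n         ! number of degrees of freedom
    end subroutine hip_scalar_flux
  end interface

end module hip_bindings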

Despite popular belief, Fortran is alive and well. In the past, I've been asked questions like "Who even writes libraries in Fortran?" or (my favorite) "Why aren't you writing this in C?" Though a publication on the age demographics of Fortran developers is still wanting, there is definitely a perception that we're an older group. I'll speak for myself in the hope of emboldening other Fortran developers to stand up for this incredible compiled language that still drives a large portion of scientific computing. In this article, I share my experience using the open source JSON-Fortran library during a refactor and upgrade of SELF-Fluids.
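To give a flavor of what that looks like, here is a minimal sketch (assuming a recent json-fortran release; the file name and keys are hypothetical, not the actual SELF-Fluids configuration) of reading a runtime configuration file:

! Minimal JSON-Fortran sketch; the file name and keys are hypothetical.
program read_config
  use json_module
  implicit none

  type(json_file) :: json
  integer :: poly_degree
  logical :: found

  call json%initialize()
  call json%load(filename='self_fluids.json')   ! parse the configuration file

  ! Look up a value by its path; fall back to a default if it is absent.
  call json%get('model.polynomial_degree', poly_degree, found)
  if (.not. found) poly_degree = 7

  call json%destroy()
end program read_config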

Fluid Numerics is starting to put together datasets and comprehensive toolkits to help characterize network, compute, memory, and disk performance on cloud and on-premise systems. This article dives into the early stages of that development, using the OSU benchmarks to assess point-to-point latency and bandwidth on GCP.

Joe is culling some of his notes from teaching at the Parallel Computing Summer Research Internship and in undergraduate courses on parallel programming. This article covers some basics of MPI for data-parallel applications and some simple models for understanding weak and strong scaling. Click here to read the journal entry.
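As a quick refresher on the two terms (this summary is mine, not an excerpt from the article): strong scaling measures how the time to solve a fixed-size problem drops as ranks are added, while weak scaling measures how the time changes when the work per rank is held constant as ranks are added. A minimal data-parallel MPI sketch in Fortran, with a hypothetical problem size, looks like this:

! Minimal data-parallel MPI sketch (hypothetical problem size, not from the article).
! Each rank owns a contiguous slice of the global array and works on it
! independently; a reduction combines the partial results.
program data_parallel_demo
  use mpi
  implicit none
  integer, parameter :: n_global = 1000000
  integer :: ierr, rank, nranks, n_local, i
  real(8) :: local_sum, global_sum
  real(8), allocatable :: x(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nranks, ierr)

  ! Strong scaling: n_global is fixed, so n_local shrinks as nranks grows.
  ! For a weak scaling study, n_local would be fixed and n_global would grow.
  n_local = n_global / nranks
  allocate(x(n_local))
  x = real(rank, 8)

  local_sum = 0.0d0
  do i = 1, n_local
    local_sum = local_sum + x(i)
  end do

  call MPI_Allreduce(local_sum, global_sum, 1, MPI_DOUBLE_PRECISION, &
                     MPI_SUM, MPI_COMM_WORLD, ierr)

  if (rank == 0) print *, 'global sum = ', global_sum
  deallocate(x)
  call MPI_Finalize(ierr)
end program data_parallel_demo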

Dr. Schoonover has been engaged to continue his work with Florida State University, using MITgcm to simulate and potentially forecast Gulf Stream separation from the American coastline. He is currently logging relevant results from his research on carrying workloads from on-premise resources to the cloud. This includes benchmarking and cost analysis for MITgcm on Google Compute Engine in order to understand how the cost of operating the application compares across facilities. Click here to read the journal entry.

Joe has started a running comparison between On-Premise and Cloud Costs for High Performance Computing, and the results could be helpful to scientific research teams that are turning to the cloud due to on-premise resource limitations. As science grows, so do the resources necessary to conduct it. We are here to help teams approach this new opportunity to perform HPC workloads with confidence and clarity about the costs involved. Click here to read the journal entry.