Running the Solver

Using the M-Star Solver on Windows

Overview

Once a model is configured, users can choose to:

  1. Export Solver Files, or

  2. Run Solver.

Export Solver Files

Create a package of files that can be executed using M-Star Solve. This package contains .xml and .stl files describing the runtime parameters and the simulation geometry, respectively.
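
For illustration only, an exported package might contain files like the following (the case folder and geometry file names are hypothetical and depend on your model):

case1/
    input.xml       runtime parameters
    Impeller.stl    simulation geometry
    Tank.stl        simulation geometry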

Run Solver

Executes the simulation on the same machine used to build the model.

There are two basic modes of executing the M-Star CFD Solver: CPU mode and GPU mode. In CPU mode, only the CPU processors are used. In GPU mode, the specified GPU accelerators are used to execute the solver.

Note that the number of cores allocated to the simulation should be based on the number of physical cores, not the number of hyperthreaded (logical) cores.
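
One way to compare the physical and logical core counts on a Windows machine is the standard wmic utility (a minimal sketch, run from a command prompt):

wmic cpu get NumberOfCores,NumberOfLogicalProcessors

If NumberOfLogicalProcessors is larger than NumberOfCores, hyperthreading is enabled and the core count given to the solver should be limited to NumberOfCores.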

Requirements for Multi GPU usage on Windows:

  • The GPU hardware must have “peer-access”. This lets the GPUs communicate with each other. The Solver GUI will check this requirement before running.

  • The GPU hardware must be configured in TCC mode, which means that any GPU used in a solver calculation cannot also be driving a display. This may require using the nvidia-smi utility to configure a given GPU (see the example after this list).

  • All GPUs used in the calculation must have the same processor architecture.
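
A minimal sketch of switching a GPU into TCC mode with the nvidia-smi utility is shown below. It assumes the GPU of interest has ID 0 and uses the -dm (driver model) option, where 1 = TCC and 0 = WDDM; it requires administrator privileges and may require a reboot to take effect:

# Switch the GPU with ID 0 to the TCC driver model
nvidia-smi -i 0 -dm 1

# The standard nvidia-smi summary reports the current driver model (TCC or WDDM) for each GPU
nvidia-smi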

Using the M-Star Solver on Linux

Solver Executables

Table 2 Solver exe names

exe name           precision   CPU/GPU   Note
mstar-cfd          double      CPU
mstar-cfd-mgpu     single      GPU       preferred
mstar-cfd-mgpu-d   double      GPU

Unless double precision is required for your case, it is recommended to invoke the mstar-cfd-mgpu executable for single-precision GPU execution.

Example Execution

In this example, we set the license file, initialize the M-Star environment, and execute the solver in both CPU mode (on 4 processors) and GPU mode:

# Set the license file
export mstar_LICENSE=/home/user/mstar.lic

# Initialize the M-Star environment
# Be sure to also load your full OpenMPI environment
source /home/user/mstarcfd/mstar.sh

# Go to a case folder
cd /home/user/case1

# Run the solver in CPU mode on 4 processors
mpirun -np 4 mstar-cfd -i input.xml -o out

# Run the solver in GPU mode using 8 GPUs. Note that the -np argument to mpirun should match the number of physical GPU devices
# Note that we are using the mstar-cfd-mgpu executable which requires MPI to be compiled with CUDA support
# Putting all output (standard and error stream) in a log file
# --gpu-auto  : Selects gpus automatically
# --disable-ipc :  Rely on OpenMPI for GPU-GPU point to point communication
mpirun -np 8 mstar-cfd-mgpu --gpu-auto --disable-ipc -i input.xml -o out

# Run the solver on a single GPU, specifying the solver to use the GPU with ID=0
mstar-cfd-mgpu --gpu-ids=0 -i input.xml -o out

Full program usage:

Usage: mstarcfd [options] [input file]

M-Star CFD options:
-h [ --help ]                   show help message
-v [ --version ]                show version number
-i [ --input-file ] arg         input filename
-o [ --output-dir ] arg (=out)  output directory (will be created if does not
                                exist)
-r [ --restart-dir ] arg (=out) directory containing checkpoint to restart
                                from
-l [ --load ] arg               load checkpoint number
--load-last                     load the last checkpoint
-s [ --show ]                   show available checkpoints
-f [ --force ]                  if output directory exists, it will be
                                removed prior to running
--disable-ipc                   disable automatic use of interprocess
                                communication for GPU simulations
--gpu-auto                      Run on GPUs and have the code automatically
                                select which GPUs to run on. If using MPI the
                                code will try to find a unique GPU for each
                                MPI process. The automatic selection is done
                                by choosing the GPUs with the lowest CUDA GPU
                                IDs on each node. This will be influenced by
                                the CUDA_DEVICE_ORDER and
                                CUDA_VISIBLE_DEVICES environment variables if
                                set. CUDA_DEVICE_ORDER can be set to either
                                FASTEST_FIRST (which is the default and will
                                result in using the fastest GPUs for the
                                simulation) or PCI_BUS_ID (which orders GPUs
                                in the same way as the nvidia-smi utility).
                                CUDA_VISIBLE_DEVICES limits which GPUs are
                                available to this program.
--gpu-ids arg                   Run on GPUs by providing the IDs of the GPUs
                                you want to run on in a comma separated list
                                like "0,1,2,3" or just one ID to run on one
                                GPU. If using MPI the GPUs will be assigned
                                to MPI processes in the order provided (the
                                number of MPI processes must match the number
                                of GPU IDs). The IDs you provide correspond
                                to CUDA GPU IDs which are influenced by the
                                CUDA_DEVICE_ORDER and CUDA_VISIBLE_DEVICES
                                environment variables if set and so may be
                                different than the IDs displayed by the
                                nvidia-smi utility. CUDA_DEVICE_ORDER can be
                                set to either FASTEST_FIRST (which is the
                                default and means the fastest GPU will have
                                ID 0, the second fastest 1, etc) or
                                PCI_BUS_ID (which orders GPUs in the same way
                                as the nvidia-smi utility).
                                CUDA_VISIBLE_DEVICES limits which GPUs are
                                available to this program.
--gpu-uuids arg                 Run on GPUs by providing the UUIDs of the
                                GPUs you want to run on in a comma separated
                                list like "8507805d-bef7-ad07-5ed3-2cd6d1efb4
                                d1,8507805d-bef7-ad07-5ed3-2cd6d1efb4d2" or
                                just one UUID to run on one GPU. If using MPI
                                the GPUs will be assigned to MPI processes in
                                the order provided (the number of MPI
                                processes must match the number of GPU
                                UUIDs). The UUID of each GPU can be found
                                using "nvidia-smi -L".

Example PBS script

Here is an example of a PBS job script that executes the solver over 2 nodes with 16 processors each and a max walltime of 24 hours:

#!/bin/bash
#PBS -l nodes=2:ppn=16
#PBS -l walltime=24:00:00
#PBS -N dmt-job

export mstar_LICENSE=/home/user/dmt.lic
source /home/user/mstar/mstar.sh
cd $PBS_O_WORKDIR
mpirun mstar-cfd -i input.xml -o out > log.txt 2>&1
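
Assuming the script above is saved as run.pbs (the file name is arbitrary), it can be submitted and monitored with the standard PBS commands:

# Submit the job to the scheduler
qsub run.pbs

# Check the status of your queued and running jobs
qstat -u $USER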

Platform Test

To verify that the solver is installed correctly, check a few prerequisites and then run a simple simulation. Here is a checklist to run through:

Solver is installed

Verify the solver executable and mpirun are present in the target install folder under the bin sub-directory.
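
A quick way to check this from the shell, assuming the install location used in the earlier example (/home/user/mstarcfd), is:

# List the solver executables and mpirun in the install bin directory
ls /home/user/mstarcfd/bin | grep -E 'mstar-cfd|mpirun'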

Solver license has been set up

Check that your environment defines the mstar_LICENSE variable, or that the license file is in the same directory as the solver executable.
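
A minimal check from the shell:

# Print the license variable; it should point to a valid license file
echo $mstar_LICENSE

# Confirm that the file exists and is readable
ls -l $mstar_LICENSE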

Obtain a simple test case

The download page for each released version includes a cases.tgz file that contains prepared simulation files ready to execute. You may also request a test case by emailing support@mstarcfd.com, or by generating one yourself with the M-Star GUI. The examples can be generated from the GUI by going to File > New from Template.
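
Assuming you downloaded cases.tgz, it can be unpacked with the standard tar utility:

# Unpack the prepared cases
tar -xzf cases.tgz

# Move into one of the extracted case folders (the folder name will vary by release)
cd <extracted-case-folder>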

Run the test case

Execute the test case
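
For example, following the CPU-mode command shown earlier:

# Run the solver on 4 processors, sending all output to a log file
mpirun -np 4 mstar-cfd -i input.xml -o out > log.txt 2>&1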

View results in ParaView

If ParaView is installed on your Linux machine, you may open ParaView and immediately view the results. Otherwise, copy the out folder to your local machine and open ParaView there.

Restarting a Simulation

Windows

Within Windows environments, open “M-Star Solve” from the Windows Start menu. You will be prompted to browse to the case directory corresponding to the simulation you want to restart. Once the case is loaded, the interface will populate a list of all checkpoint files associated with it, as stored in the associated checkpoint directory. Select the checkpoint file from which you would like to restart and execute the run by clicking the Run button.

Linux

Within Linux environments, you will need to give the solver a command line instruction to restart from a checkpoint. For example, consider the default submission command:

mpirun -np 8 mstar-cfd -i input.xml -o out > log.txt 2>&1

To restart from the most recent checkpoint file, modify the command as follows:

mpirun -np 8 mstar-cfd -i input.xml -o out -r out --load-last > log.txt 2>&1

Alternatively, if you want to load from a checkpoint different from the last available checkpoint, you can execute:

mpirun -np 8 mstar-cfd  -i input.xml -o out -r out -l N > log.txt 2>&1

where N is the number of the specific checkpoint you wish to load.
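
If you are unsure which checkpoint numbers are available, the -s/--show option listed in the usage above can report them before restarting (a sketch, run from the case folder):

# List the checkpoints available in the existing output directory
mstar-cfd -i input.xml -r out --show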