Solver Command Line Interface
The M-Star solver is a console program that can be used directly from a command terminal. This can be useful for batch or scripted runs.
Solver Executables
The following solver executables are available, providing differing levels of numerical precision and GPU support.
- Single Precision, GPU Support
Linux:
mstar-cfd-mgpu
Windows:
mstarcfd.gpu.sp.exe
This is the default configuration used when you run the solver.
- Double Precision, GPU Support
Linux:
mstar-cfd-mgpu-d
Windows:
mstarcfd.gpu.dp.exe
This configuration provides double precision on GPUs.
- Double Precision, CPU-only Support
Linux:
mstar-cfd
Windows:
mstarcfd.mpi.exe
This configuration does not have GPU support; it only runs on the CPU. Certain features such as runtime expressions are not supported.
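The executable table above can also be expressed programmatically, which is convenient in launcher scripts that must pick the right binary per platform. A minimal sketch; the mapping simply mirrors the table above, and the select_solver helper is illustrative, not part of the M-Star distribution:

```python
# Map (platform, precision, device) to the executable names listed above.
# The select_solver helper is hypothetical, not shipped with M-Star.
SOLVER_EXECUTABLES = {
    ("linux", "single", "gpu"): "mstar-cfd-mgpu",
    ("windows", "single", "gpu"): "mstarcfd.gpu.sp.exe",
    ("linux", "double", "gpu"): "mstar-cfd-mgpu-d",
    ("windows", "double", "gpu"): "mstarcfd.gpu.dp.exe",
    ("linux", "double", "cpu"): "mstar-cfd",
    ("windows", "double", "cpu"): "mstarcfd.mpi.exe",
}

def select_solver(platform: str, precision: str, device: str) -> str:
    """Return the solver executable name for the requested configuration."""
    try:
        return SOLVER_EXECUTABLES[(platform, precision, device)]
    except KeyError:
        # e.g. there is no single-precision CPU-only build in the table above
        raise ValueError(f"no solver build for {platform}/{precision}/{device}")
```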
Command Line Usage Help
M-Star CFD (3.12.27)
Floating-Point Precision: Single
Usage: mstarcfd [options] [input file]
M-Star CFD options:
-h [ --help ] show help message
-v [ --version ] show version number
-i [ --input-file ] arg input filename
-o [ --output-dir ] arg (=out) output directory (will be created if does not
exist)
-r [ --restart-dir ] arg (=out) directory containing checkpoint to restart
from
-l [ --load ] arg load checkpoint number
--load-last load the last checkpoint
--queue-license queue license checkout
-s [ --show ] show available checkpoints
-p [ --preview ] preview mode
-f [ --force ] if output directory exists, it will be
removed prior to running
-u [ --unified-memory ] use CUDA unified memory
--disable-ipc disable use of CUDA IPC for GPU-GPU
communication
--disable-nccl disable use of NCCL for GPU-GPU communication
--gpu-auto Run on GPUs and have the code automatically
select which GPUs to run on. If using MPI the
code will try to find a unique GPU for each
MPI process. The automatic selection is done
by choosing the GPUs with the lowest CUDA GPU
IDs on each node. This will be influenced by
the CUDA_DEVICE_ORDER and
CUDA_VISIBLE_DEVICES environment variables if
set. CUDA_DEVICE_ORDER can be set to either
FASTEST_FIRST (which is the default and will
result in using the fastest GPUs for the
simulation) or PCI_BUS_ID (which orders GPUs
in the same way as the nvidia-smi utility).
CUDA_VISIBLE_DEVICES limits which GPUs are
available to this program.
--gpu-ids arg Run on GPUs by providing the IDs of the GPUs
you want to run on in a comma separated list
like "0,1,2,3" or just one ID to run on one
GPU. If using MPI the GPUs will be assigned
to MPI processes in the order provided (the
number of MPI processes must match the number
of GPU IDs). The IDs you provide correspond
to CUDA GPU IDs which are influenced by the
CUDA_DEVICE_ORDER and CUDA_VISIBLE_DEVICES
environment variables if set and so may be
different than the IDs displayed by the
nvidia-smi utility. CUDA_DEVICE_ORDER can be
set to either FASTEST_FIRST (which is the
default and means the fastest GPU will have
ID 0, the second fastest 1, etc) or
PCI_BUS_ID (which orders GPUs in the same way
as the nvidia-smi utility).
CUDA_VISIBLE_DEVICES limits which GPUs are
available to this program.
--gpu-uuids arg Run on GPUs by providing the UUIDs of the
GPUs you want to run on in a comma separated
list like "8507805d-bef7-ad07-5ed3-2cd6d1efb4
d1,8507805d-bef7-ad07-5ed3-2cd6d1efb4d2" or
just one UUID to run on one GPU. If using MPI
the GPUs will be assigned to MPI processes in
the order provided (the number of MPI
processes must match the number of GPU
UUIDs). The UUID of each GPU can be found
using "nvidia-smi -L".
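Because the IDs accepted by --gpu-ids can differ from the ordering shown by nvidia-smi, --gpu-uuids is the unambiguous way to pin specific devices. A sketch of building that argument from `nvidia-smi -L` output; the helper is hypothetical, and it assumes the typical `GPU 0: <name> (UUID: GPU-xxxx)` line format, stripping the `GPU-` prefix to match the UUID style shown in the usage help above:

```python
import re

def gpu_uuids_arg(nvidia_smi_l_output: str) -> str:
    """Extract GPU UUIDs from `nvidia-smi -L` text and build the
    comma-separated --gpu-uuids argument. Hypothetical helper."""
    # Typical line: "GPU 0: NVIDIA A100 (UUID: GPU-8507805d-...)"
    # The "GPU-" prefix is dropped to match the UUID format in the help text.
    uuids = re.findall(r"UUID:\s*(?:GPU-)?([0-9a-f-]+)\)", nvidia_smi_l_output)
    if not uuids:
        raise ValueError("no GPU UUIDs found in input")
    return "--gpu-uuids=" + ",".join(uuids)
```

The result can be appended directly to the solver command line, e.g. `mstar-cfd-mgpu --gpu-uuids=... -i input.xml -o out`.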
Example Executions
Tip
Be sure to set up your M-Star license and PATH variables to access the M-Star solver commands. Also note that you may need to invoke mpiexec or mpirun depending on your environment. On Windows you will need to call mpiexec.
Run on 4 CPUs:
mpirun -np 4 mstar-cfd -i input.xml -o out
Run on 8 GPUs:
mpirun -np 8 mstar-cfd-mgpu --gpu-auto -i input.xml -o out
Run on 8 GPUs and force GPU-to-GPU communication through MPI:
mpirun -np 8 mstar-cfd-mgpu --gpu-auto --disable-ipc -i input.xml -o out
Run on a single GPU:
mstar-cfd-mgpu --gpu-ids=0 -i input.xml -o out
To restart from the most recent checkpoint, use the -r out --load-last arguments:
mpirun -np 8 mstar-cfd-mgpu -i input.xml -o out -r out --load-last
Alternatively, if you want to load from a checkpoint different from the last available checkpoint, you can execute:
mpirun -np 8 mstar-cfd-mgpu -i input.xml -o out -r out -l N
where N is the number of the checkpoint you wish to load.
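In automated pipelines, the restart flags above are often assembled by a launcher script. A minimal sketch; the restart_command helper is hypothetical, and the flags are exactly those shown in the usage help:

```python
from typing import Optional

def restart_command(nprocs: int, input_file: str, out_dir: str,
                    checkpoint: Optional[int] = None) -> list:
    """Build an mpirun command line that restarts the solver from a
    checkpoint in out_dir: a specific checkpoint number if given,
    otherwise the last available one. Hypothetical helper."""
    cmd = ["mpirun", "-np", str(nprocs), "mstar-cfd-mgpu",
           "-i", input_file, "-o", out_dir, "-r", out_dir]
    if checkpoint is None:
        cmd.append("--load-last")      # restart from the last checkpoint
    else:
        cmd += ["-l", str(checkpoint)]  # restart from checkpoint N
    return cmd
```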
Runtime Solver Process Control
While the solver is running, it can be instructed with a limited set of commands to write out checkpoint files, output data, or gracefully stop the simulation. This is invoked by creating a file named SolverCommands.txt in the output directory (usually out/), i.e. out/SolverCommands.txt. Each line in the file indicates an action the solver should take, and a single file can contain multiple commands. The solver will read the file, apply all the commands, then delete the SolverCommands.txt file.
| Command | Description |
|---|---|
| output volume | Output volume data |
| output slices | Output slice data |
| output particles | Output particles data |
| output stats | Output stats data |
| output checkpoint | Output a checkpoint |
| kill | Gracefully stop the simulation |
Example: Write out a checkpoint and stop the simulation
output checkpoint
kill
Example: Write out stats, slice and volume data, let simulation continue
output slices
output volume
output stats
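The control file above can also be written from a script. Because the solver deletes the file after reading it, writing to a temporary name and renaming into place avoids the solver ever seeing a half-written file; the atomic-rename approach is a precaution on my part, not a documented requirement, and the helper name is hypothetical:

```python
import os
import tempfile

def send_solver_commands(out_dir: str, commands: list) -> None:
    """Write SolverCommands.txt into the solver's output directory.
    The file is written under a temporary name first and then renamed
    into place so the solver never reads a partially written file.
    Hypothetical helper, not part of M-Star."""
    fd, tmp = tempfile.mkstemp(dir=out_dir, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(commands) + "\n")
    os.replace(tmp, os.path.join(out_dir, "SolverCommands.txt"))
```

For example, `send_solver_commands("out", ["output checkpoint", "kill"])` reproduces the first example above.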
Queue for M-Star CFD license
If you are using a floating license, you may sometimes find that no license is available. When this happens, the solver reports that no licenses are available and exits. To address this, you can use the license queueing capability: the solver will start up and wait for a license to become available, printing periodic notifications that it is still waiting in the license queue. Use the --queue-license command line argument to invoke this behavior. Note that solver runs started from the solver GUI automatically run in this manner.
mstar-cfd-mgpu -i input.xml -o out --queue-license
Alternatively, you can invoke the same functionality with an environment variable: set RLM_QUEUE=1 prior to starting the solver.
export RLM_QUEUE=1
mstar-cfd-mgpu -i input.xml -o out
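The environment variable can equally be set from a launcher script rather than the shell. A sketch; the wrapper function is hypothetical, and it simply injects RLM_QUEUE=1 into the child process environment:

```python
import os
import subprocess

def run_with_license_queue(cmd: list) -> int:
    """Run a solver command with RLM_QUEUE=1 in its environment so it
    waits in the license queue instead of exiting when no license is
    free. Hypothetical wrapper, not part of M-Star."""
    env = dict(os.environ, RLM_QUEUE="1")  # copy env, add the queue flag
    return subprocess.run(cmd, env=env).returncode
```

For example, `run_with_license_queue(["mstar-cfd-mgpu", "-i", "input.xml", "-o", "out"])` is equivalent to the shell export shown above.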