
A Package for Computational Fluid Dynamics Simulations

Below you will find information on how to obtain and check out product licenses, and on support and training.

Functionality

Simulation of Turbulent flow in Arbitrary Regions - Computational Continuum Mechanics+ (STAR-CCM+) is a C++-based, finite-volume program package for modelling fluid flow problems and beyond (it also includes FEM, etc.). It is developed by Siemens PLM Software (which acquired CD-adapco). Today, the Simcenter STAR-CCM+ package can be applied to a wide range of multiphysics problems such as

  • Fluid dynamics
  • Conjugate heat transfer
  • Multiphase flows
  • Reacting flows
  • Solid mechanics (FEM)
  • Particle flows
  • Rheology
  • Electrochemistry
  • Electromagnetics
  • Aero-acoustics
  • Fluid-structure interaction

Conditions for Usage and Licensing

Our STAR-CCM+ modules are restricted to members of the adapco user group.

You can apply to become a group member at support@nhr.zib.de if your usage purpose is research/teaching, or if you are a student. Projects that are financed by industrial partners are not allowed. To check whether you are already a member of the UNIX group, use the groups command as shown below.
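A quick check on the command line (adapco is the group name mentioned above):
groups      ## lists the UNIX groups of the current user; adapco should appear among them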

Within the core-h limits of Test Projects (see test account) we provide free Power On Demand (POD) keys for teaching and academic research in non-industry-funded projects.
If you fulfil these conditions, simply write to support@nhr.zib.de.

To run STAR-CCM+ you have to specify the parameter -podkey, as shown in the Example Jobscripts below.

Introduction

This page describes the specifics of installation and usage at NHR@ZIB systems only. A brief product overview is provided here. An introductory tutorial is accessible here.

Public documentation, tutorials and support - no registration needed

Everybody can access both the official User Guide (includes search function, click-by-click tutorials and files) and https://community.sw.siemens.com (anyone can contribute there). On NHR@ZIB systems the following material is installed locally:

  • User Guide (PDF): /sw/eng/starccm/<version>/STAR-CCM+<version>/doc
  • Tutorial case files: /sw/eng/starccm/<version>/STAR-CCM+<version>/doc/startutorialsdata
  • Tutorials with solutions: /sw/eng/starccm/<version>/STAR-CCM+<version>/tutorials
  • Verification data: /sw/eng/starccm/<version>/STAR-CCM+<version>/VerificationData
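For example, the locally installed tutorial material can be copied into a directory of your own and worked on there (a minimal sketch; <version> stands for one of the installed versions, and the target directory ~/starccm-tutorials is an arbitrary choice):

## list the installed versions
ls /sw/eng/starccm/
## copy the tutorial case files into your own directory
cp -r /sw/eng/starccm/<version>/STAR-CCM+<version>/doc/startutorialsdata ~/starccm-tutorials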

Full support and training/courses - after registration

Students can register for free to search the entire support center (most extensive Q&A database of Siemens PLM) and attend the Xcelerator Academy for Academics (full self-paced courses for all levels).

Installed versions

To see available versions type:
module avail starccm

All versions use double precision. The recommended default module - providing all necessary environment settings - can be loaded with:
module load starccm
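To load a specific version instead of the default, give the full version string (the version below is the one used in the GPU example further down; check module avail starccm for what is currently installed):
module load starccm/19.04.007-r8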

Example Jobscripts

Genoa cluster - slurm startscript example
#!/bin/bash
#SBATCH --partition cpu-genoa:test
#SBATCH --time 01:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=192
#SBATCH --job-name=StarCCM

module load starccm

## create the host list for StarCCM+
srun hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > starhosts.${SLURM_JOB_ID}
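## starhosts.${SLURM_JOB_ID} now contains one line per node in the format hostname:ntasks, as expected by -machinefile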

export CDLMD_LICENSE_FILE=1999@flex.cd-adapco.com
export PODKEY=<type your podkey here - we can provide one for you - under the above mentioned terms>
export MYCASE=<type your sim file name>

## run starccm+
starccm+ -batch ${MYCASE} \
-power -podkey ${PODKEY} -licpath ${CDLMD_LICENSE_FILE} \
-np ${SLURM_NTASKS} \
-machinefile starhosts.${SLURM_JOB_ID} 

echo '#################### StarCCM+ finished ############'
rm starhosts.$SLURM_JOB_ID
A100 cluster - slurm startscript example
#!/bin/bash
#SBATCH --partition gpu-a100:test
#SBATCH --time 01:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8	# This needs to be an integer multiple of the GPUs per node.
#SBATCH --gres=gpu:4 # number of GPUs per node
#SBATCH --gpu-bind=single:2 # Recommended: pin each process to its own GPU (single:<ntasks_per_gpu>).
#SBATCH --job-name=StarCCM

## check if GPU offload device is available
nvidia-smi

module load starccm/19.04.007-r8

## create the host list for StarCCM+
srun hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > starhosts.${SLURM_JOB_ID}

export CDLMD_LICENSE_FILE=1999@flex.cd-adapco.com
export PODKEY=<type your podkey here - we can provide one for you - under the above mentioned terms>
export MYCASE=<type your sim file name>

## run starccm+
starccm+ -batch ${MYCASE} \
-power -podkey ${PODKEY} -licpath ${CDLMD_LICENSE_FILE} \
-np ${SLURM_NTASKS} -gpgpu auto \
-machinefile starhosts.${SLURM_JOB_ID} 

echo '#################### StarCCM+ finished ############'
rm starhosts.$SLURM_JOB_ID
CLX cluster - slurm startscript example
#!/bin/bash
#SBATCH --partition cpu-clx:test
#SBATCH --time 01:00:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --job-name=StarCCM

module load starccm

## create the host list for StarCCM+
srun hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > starhosts.${SLURM_JOB_ID}

export CDLMD_LICENSE_FILE=1999@flex.cd-adapco.com
export PODKEY=<type your podkey here - we can provide one for you - under the above mentioned terms>
export MYCASE=<type your sim file name>

## run starccm+
starccm+ -batch ${MYCASE} \
-power -podkey ${PODKEY} -licpath ${CDLMD_LICENSE_FILE} \
-np ${SLURM_NTASKS} \
-machinefile starhosts.${SLURM_JOB_ID} -mpi intel

echo '#################### StarCCM+ finished ############'
rm starhosts.$SLURM_JOB_ID
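The scripts above are submitted as regular Slurm batch jobs, e.g. (the file name starccm.slurm is an arbitrary choice):
sbatch starccm.slurm      ## submit the job
squeue -u $USER           ## check the job status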

GUI client - Server - Connection

Start remote server on any compute node (headless/without GUI), e.g.
srun -p cpu-clx:test --ntasks-per-node=96 --pty bash
module load starccm
starccm+ -np 96 -server -collab -power -podkey $PODKEY

The hostname and port information displayed after the server startup is needed later (see "Server::start -host ..."). The "-power" flag is needed for licensing reasons. The "-collab" flag is needed if your local and remote user names differ. "-collab" allows anyone to attach to your server, but do not worry: our system prevents others from accessing your exclusively allocated compute nodes (besides you, only root can ssh there).

Locally, the STAR-CCM+ client requires a password-less SSH connection; however, for security reasons NHR@ZIB allows password-protected SSH keys only. To resolve this conflict on Linux/Mac systems, run an ssh-agent in the background:

eval "$(ssh-agent -s)"
ssh-add ~/.ssh/your_private_key

This way, you need to type your password only once. If you are on Windows using PuTTY, please refer here.

Now you can start the STAR-CCM+ client locally and connect to the remote server with the following settings: replace your username, the host name b####.usr.hlrn.de, and the port number #####, according to the screen output created after starting the remote server (see first step).
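As an alternative to the GUI connection dialog, the client can also be pointed at the running server directly on the command line. This is a sketch based on the -host option mentioned above; replace the placeholders with the host name and port reported by the server:
starccm+ -host b####.usr.hlrn.de:#####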

Known Issues

To check whether the license server of CD-adapco is accessible, type:

telnet flex.cd-adapco.com 1999

This test (Trying ...) is positive if the server answers (within a second) with:
Escape character is '^]'
Typically, after some minutes, a negative outcome is indicated by:
telnet ... Connection timed out
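If telnet is not available on your machine, a similar reachability check can be done with plain bash (a minimal sketch; the 5-second timeout is an arbitrary choice):
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/flex.cd-adapco.com/1999' && echo "license server reachable" || echo "license server NOT reachable"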

