...
```
$ parallel --xapply echo {1} {2} ::: 1 2 3 ::: a b c
1 a
2 b
3 c
$ parallel echo {1} {2} ::: 1 2 3 ::: a b c
1 a
1 b
1 c
2 a
2 b
2 c
3 a
3 b
3 c
```
Doing local I/O tasks in parallel
To distribute data from a global location ($WORK, $HOME) to several nodes simultaneously - similar to an MPI_Bcast - one can use:
```
pdcp -r -w $SLURM_NODELIST $WORK/input2copy/testdir2copy* $LOCAL_TMPDIR
```
$LOCAL_TMPDIR exists - node-locally only - on all compute nodes (see Special Filesystems for more details).
To collect individual data from several node-local locations simultaneously - similar to an MPI_Gather - one can use:
```
rpdcp -r -w $SLURM_NODELIST $LOCAL_TMPDIR/output2keep/* $WORK/returndir
```
rpdcp automatically renames the collected data by appending the hostname of the node it originates from. This avoids overwriting files with the same name.
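The effect of this renaming scheme can be illustrated with a small local simulation (no cluster required; the node names and file name below are hypothetical, not taken from the examples above):

```shell
# Simulate rpdcp's renaming scheme: each gathered file gets the
# origin hostname appended, so identically named files from
# different nodes do not collide in the destination directory.
DEST=$(mktemp -d)                      # stands in for $WORK/returndir
for node in node01 node02 node03; do   # hypothetical node names
    echo "data from $node" > "$DEST/output.dat.$node"
done
ls "$DEST"
```

All three files named output.dat survive side by side because each carries its origin hostname as a suffix.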
In the next example, local output data ($LOCAL_TMPDIR) is moved back to $WORK while the main program is still running. With "&" you send the main program to the background.
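The general pattern of overlapping staging I/O with a running computation can be sketched self-contained as follows (the sleep command stands in for the main program, and the temporary directories stand in for $LOCAL_TMPDIR and $WORK/returndir - all placeholders, not the actual job setup):

```shell
# Sketch: overlap staging I/O with the main computation.
TMP=$(mktemp -d)                   # stands in for $LOCAL_TMPDIR
RET=$(mktemp -d)                   # stands in for $WORK/returndir
echo "intermediate result" > "$TMP/output2keep.dat"

sleep 2 &                          # main program, backgrounded with "&"
MAIN_PID=$!

mv "$TMP/output2keep.dat" "$RET/"  # I/O proceeds while the job computes
wait "$MAIN_PID"                   # block until the main program finishes
```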
...