Submit job for Fluent 13 in HPC

The HPC has a 5-user, 48-core (CPU) license for ANSYS 13. This means only 5 users
at a time, with a total of 48 cores among all 5 users, can use the ANSYS package.
Windows users should connect to the HPC using an SSH client.
Steps:
1.) Prepare your Fluent case and data files (.cas and .dat) using the GUI on your PC
2.) Make the batch file for the job (for a sample, see Annexure I). Make the
necessary changes:
file set-batch-opt y y y n
file start-transcript xyz123.trn
file read-case-data xyz123
solve init init-flow
solve it 100
file write-case-data xyz123_%i.gz
file stop-transcript
exit
Change the file names and iteration count as per your need. If you want to use the
autosave option while making the case file in Fluent, then autosave to
/home/your-hpc-id/filename. If you are reading only the case file, remove the data
option in the 3rd line (i.e., use file read-case instead of file read-case-data).
Save the above file with the .jou extension. Check the file names given in the
batch file: they should be the same as the Fluent case and data file names. The
name of the batch file should be the same as the one you put in your pbs-script file.
3.) Make the pbs-script file (see the sample in Annexure II). Here you decide
how many cores you need to run the Fluent case and the queue in which you want
to submit the job.
Remember the institute has a license for 48 cores of parallel Fluent.
a) To change the no. of cores:
Each node consists of 8 cores. First, change the no. of nodes in the following
line of the script file, choosing from 1 to 6 nodes based upon your requirement:
#PBS -l nodes=4:ppn=8
Then, change the -t value in the fluent line accordingly (no. of cores = no. of
nodes * 8; here -t32, since 32 = 4*8, 4 being the no. of nodes set for the job):
cd /scratch/your-hpc-id
/opt/software/ansys_inc/v130/fluent/bin/fluent 3ddp -g -cnf=$PBS_NODEFILE -t32 -i sample-batch.jou
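The nodes-to-cores arithmetic above can be checked with a short shell sketch
(NODES and CORES are illustrative variable names, not part of the actual PBS script):

```shell
# 8 cores per node on this cluster; cores requested = nodes * 8.
NODES=4                      # value set in the "#PBS -l nodes=..." line
CORES=$((NODES * 8))         # value to pass to fluent as -t
echo "#PBS -l nodes=${NODES}:ppn=8"
echo "fluent 3ddp -g -cnf=\$PBS_NODEFILE -t${CORES} -i sample-batch.jou"
```

With 4 nodes this prints -t32; the 48-core license means you can never go beyond 6 nodes.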

b) To change the queue for the job:


In the institute HPC, there are three queues to which a Fluent job can be submitted.

Queue     Range of cores    Max. run time
Small     8 - 32            120 hrs
Medium    32 - 96           96 hrs
Large     96 and above      72 hrs

Since the institute has a maximum of 48 cores for Fluent, the highest queue you
can use is Medium. Based upon your core requirement, set small or medium in this
particular line:
#PBS -q medium
Upon making all the changes, save the file with the extension .dat
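The queue choice from the table above can be sketched in shell (a sketch only;
CORES and QUEUE are illustrative names, and at the boundary value of 32 either
queue would accept the job):

```shell
# Choose the PBS queue from the requested core count.
# The 48-core Fluent license means the Large queue (96+) never applies here.
CORES=48
if [ "$CORES" -le 32 ]; then
    QUEUE=small              # 8 - 32 cores
else
    QUEUE=medium             # 32 - 96 cores; license caps us at 48
fi
echo "#PBS -q $QUEUE"
```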

Procedure to run the job:


1.) Transfer all 4 files (Fluent .cas & .dat, batch .jou & pbs-script .dat) into
your scratch directory
2.) Use the command:
dos2unix name-of-your-batch-file.jou
3.) Do the same for the pbs-script file
4.) Use the following command to put the job in the queue:
qsub name-of-the-pbs-script.dat
A job-id is now assigned to your case.
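The dos2unix step strips the carriage-return characters that Windows editors add;
if dos2unix is not installed, tr does the same conversion. A minimal sketch (the
file names are illustrative):

```shell
# Create a file with Windows (CRLF) line endings, then convert it to Unix (LF).
printf 'file read-case-data xyz123\r\n' > sample-batch.jou
tr -d '\r' < sample-batch.jou > sample-batch-unix.jou
```

Fluent may misread the journal file if the CR characters are left in, which is
why this conversion is done before qsub.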

To check the status of your job, enter the command qstat -u your-hpc-id
To delete a job, type qdel job-id
Remove unnecessary files using the rm command.

Annexure I
Sample batch files. Remember to store these files with the extension .jou
a) For Steady Case :
file set-batch-opt y y y n
file start-transcript xyz123.trn
file read-case-data xyz123
solve init init-flow
solve it 100 ; 100 = no. of iterations
file write-case-data xyz123_%i.gz
file stop-transcript
exit
b) For Unsteady Case :
file set-batch-opt y y y n
file start-transcript xyz123.trn
file read-case-data xyz123
solve dual-time-iterate 3000 5e-06 ; 3000 = no. of iterations, 5e-06 = time step size
file write-case-data xyz123_%i.gz
file stop-transcript
exit

Annexure II
Sample pbs-script file.
#PBS -l nodes=4:ppn=8
#PBS -l fluent=1
#PBS -l fluent_lic=48
#PBS -q medium
#PBS -V
#EXECUTION SEQUENCE
cd /scratch/your-hpc-id
/opt/software/ansys_inc/v130/fluent/bin/fluent 3ddp -g -cnf=$PBS_NODEFILE -t32 -i sample-batch.jou
#UNCOMMENT TO REMOVE THE UNWANTED FILES
#rm *_PBSin*
#rm *.env *.inp
#UNCOMMENT TO COMPRESS THE FILES
#find . -type f -exec gzip {} \;
