Using the XSEDE Allocation

This is a set of brief instructions for getting LAMMPS jobs up and running on Stampede2. Far more comprehensive documentation and instructions can be found here:

  1. TACC Portal:

  2. Stampede2 User Guide:

Create XSEDE and STAMPEDE2 accounts

  1. Go to the XSEDE user portal and create an account.

  2. Send your username to me so I can add you to the allocation.


    • Make your password secure but easy to remember, as you will use it every time you log in.

    • You are required to use two-factor authentication, so provide your cell phone number; you will use it to log in.

  3. Once you have been added to the project, go to My XSEDE --> Accounts on the website and note the entry under Local Username. This is your username for logging onto XSEDE resources.

  4. Go to the password reset page and request a password reset link to set your password. This will be the password you use to log onto Stampede2.

Log onto STAMPEDE2

SSH onto Stampede2 (replace username below with your TACC username from the previous section) by

$> ssh username@stampede2.tacc.utexas.edu

and enter your Stampede2 password at the prompt. A verification code will be texted to your cell phone; enter it at the prompt.

You are now logged into Stampede2.

Important: do NOT run any high-performance computations here, as this is just the login (front) node. See subsequent sections for running jobs on the compute nodes.
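
To save typing the full hostname each time, you can add an entry to ~/.ssh/config on your own machine (a sketch; yourusername is a placeholder for your TACC username):

```
Host stampede2
    HostName stampede2.tacc.utexas.edu
    User yourusername
```

After that, `ssh stampede2` is enough; you will still be prompted for your password and verification code.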

Copy any files you need onto Stampede2 using scp (the trailing colon puts the file in your home directory there):

$> scp myfile username@stampede2.tacc.utexas.edu:

Example: Compiling LAMMPS for Stampede2

Check out a copy of LAMMPS into your home directory using git:

$> git clone https://github.com/lammps/lammps.git

Go to the source directory and use make to determine which packages are installed:

$> cd lammps/src

$> make package-status

Installed YES: package ASPHERE

Installed YES: package BODY

Installed YES: package CLASS2

Installed YES: package COLLOID

Installed YES: package COMPRESS

Installed YES: package CORESHELL

Installed YES: package DIPOLE

Installed NO: package GPU
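
Since make package-status emits plain text, you can filter it with grep. A sketch using sample lines like those above (on Stampede2 you would pipe the real command instead):

```shell
# Sample package-status output (stand-in for running: make package-status)
status='Installed YES: package ASPHERE
Installed YES: package BODY
Installed NO: package GPU'

# Count enabled packages; on Stampede2: make package-status | grep -c "Installed YES"
count=$(printf '%s\n' "$status" | grep -c 'Installed YES')
echo "$count"
```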


Add and remove packages using make yes-<package> and make no-<package>. For example:

$> make yes-ASPHERE

Installing package ASPHERE

$> make no-ASPHERE

Uninstalling package ASPHERE

(To see the configuration on lubbock, use the make package-status command in /opt/lammps/src)

Make sure to uninstall the USER-INTEL, USER-SMD, and USER-KIM packages:

$> make no-USER-INTEL

$> make no-USER-SMD

$> make no-USER-KIM

Build LAMMPS for Stampede2 by

$> make stampede

The resulting executable, called lmp_stampede, is placed in the src directory.

Example: Running LAMMPS on Stampede2

In your run directory, create a batch script. Use the following as an example:


#!/bin/bash
#SBATCH -J PVDF_MD              # job name
#SBATCH -o PVDF_MD_output_%j    # output file name (%j expands to the job ID)
#SBATCH -N 2                    # total number of nodes
#SBATCH -n 128                  # total number of MPI tasks requested
                                # (should be less than or equal to N*64,
                                #  i.e. 64 MPI tasks per node)
#SBATCH -p development          # queue: normal/development/etc.
                                # (use development unless production run)
#SBATCH -t 01:30:00             # run time (hh:mm:ss) - 1.5 hours
#SBATCH --mail-type=begin       # email me when the job starts
#SBATCH --mail-type=end         # email me when the job finishes

ibrun ~/lammps/src/lmp_stampede < in.deform.pvdf.txt


  • The #SBATCH lines begin with #, so the shell treats them as comments, but they are directives read by the Slurm scheduler.

  • Replace ~/lammps/src/lmp_stampede and in.deform.pvdf.txt with the locations of your compiled LAMMPS executable and your input file.

  • Do not use mpirun; ibrun will take care of all parallelism.

  • Try to estimate your run time accurately. If you ask for too much time, your job will wait longer in the queue before it starts; if you ask for too little, your job may be cut off prematurely.
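
The node/task bookkeeping above (-n at most N*64) can be sanity-checked with a little shell arithmetic before you submit; the values below are the ones from the example script:

```shell
# Values from the example batch script
N=2      # nodes     (#SBATCH -N)
n=128    # MPI tasks (#SBATCH -n)

# The script's own rule: n should be less than or equal to N*64
if [ "$n" -le $((N * 64)) ]; then
  msg="OK: $n tasks fit on $N nodes"
else
  msg="ERROR: reduce -n or request more nodes"
fi
echo "$msg"
```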

Submit your job by

$> sbatch myjob.slurm

where myjob.slurm is whatever you named your batch script.
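
If you set up run directories often, the batch script can itself be generated from the shell. A sketch that writes the example script above to a hypothetical file name and checks it (the sbatch call is commented out because it only works on Stampede2):

```shell
# Write the example batch script (myjob.slurm is a hypothetical name)
cat > myjob.slurm <<'EOF'
#!/bin/bash
#SBATCH -J PVDF_MD
#SBATCH -o PVDF_MD_output_%j
#SBATCH -N 2
#SBATCH -n 128
#SBATCH -p development
#SBATCH -t 01:30:00
ibrun ~/lammps/src/lmp_stampede < in.deform.pvdf.txt
EOF

# Quick check that all six #SBATCH directives made it in
grep -c '^#SBATCH' myjob.slurm

# sbatch myjob.slurm   # submit (run this on Stampede2 only)
```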

You can check the status of your job by logging into the TACC portal, or by running squeue -u username on a login node. If you selected --mail-type=begin and --mail-type=end, you will also receive email notifications.

Copying files to/from stampede2

Use scp to transfer files to/from Stampede2. Suppose you have a file called myfile.txt in your home directory on Stampede2. To copy this file to your computer:

$> scp username@stampede2.tacc.utexas.edu:myfile.txt .

To copy an entire directory (e.g. a directory called mydir in your home directory), use the -r flag:

$> scp -r username@stampede2.tacc.utexas.edu:mydir .

To copy a file myfile2.txt from your computer to Stampede2, just switch the order:

$> scp myfile2.txt username@stampede2.tacc.utexas.edu:

and similarly for directories.
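
If you copy files back and forth a lot, small wrapper functions save typing. A sketch (myusername is a placeholder; the echo makes this a dry run that prints the command instead of executing it, so drop echo to copy for real):

```shell
user=myusername                      # placeholder: your TACC username
host=stampede2.tacc.utexas.edu

pull() { echo scp -r "$user@$host:$1" .; }   # copy from Stampede2 to here
push() { echo scp -r "$1" "$user@$host:"; }  # copy from here to Stampede2

pull myfile.txt
push myfile2.txt
```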