GCP Marketplace

Create a Slurm-GCP Deployment

1. Find the Fluid Numerics Slurm-GCP marketplace deployment

Go to https://console.cloud.google.com/marketplace and search for "Slurm-GCP". Click "Launch on Compute Engine".

2. Set the deployment name, zone, and network

The deployment name is used as an identifier in Deployment Manager for your cluster. Additionally, the deployment name prefixes the names of all compute instances in your cluster. For example, if the deployment name is set to 'fluid-slurm', instances will have the following names:

  • Controller node: fluid-slurm-controller
  • Login node(s): fluid-slurm-login*
  • Compute nodes: fluid-slurm-compute-*

Select the zone where you want the controller node, login node(s), and first compute partition to reside. If you've created your own network and subnetwork, make sure that the subnet resides in the region containing this zone.

Select the network and subnetwork you want to deploy your cluster's resources within. You can use the default network, or create your own network prior to launching Slurm-GCP. Leave the External IP setting as "Ephemeral".
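
If you choose to create your own network, a minimal sketch using the gcloud CLI is shown below; the network name, subnet name, region, and IP range are placeholders, so substitute values appropriate for your project.

# Create a custom-mode VPC network (names and ranges below are examples only).
gcloud compute networks create slurm-network --subnet-mode=custom

# Create a subnet in the region that contains the zone you will select above.
gcloud compute networks subnets create slurm-subnet \
    --network=slurm-network \
    --region=us-central1 \
    --range=10.10.0.0/16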

3. Configure the login node(s) and controller node

Fluid Numerics Slurm-GCP permits multiple login nodes. Set the instance count to the number of login nodes you need, and set the machine type, boot disk type, and boot disk size (in GB) for the login nodes. Keep in mind that the machine type affects the peak network bandwidth. By default, the login node boot disks host only the operating system, and user home directories are NFS-mounted from the controller node.

When deployed through Marketplace, Fluid Numerics Slurm-GCP creates a single controller node. By default, the controller hosts the Slurm database and serves the /apps and /home directories to the login and compute nodes. Although 4 vCPUs is sufficient for one user running a few jobs simultaneously, Fluid Numerics recommends 32 vCPUs for teams of 3-6 users.

If you plan to serve /home from the controller, increase the boot disk size to meet your team's storage needs on the cluster. Alternatively, you can configure your cluster to mount /home from another network-attached storage system after deployment.

Contact support if you'd like to use Cloud SQL to host your Slurm database.

4. Configure the default compute partition

When you launch via Marketplace, you set up your first compute partition for Slurm-GCP. Give the partition a meaningful name; users will reference this name when submitting jobs to run on the default compute nodes. In this tutorial, it is called "partition-1".

The static node count determines how many compute nodes in the default partition are created with this deployment; these static nodes are never taken down by Slurm's power-save module. The max node count determines the maximum number of nodes in this partition that can be active at any given time. When the static node count is less than the max node count, Slurm creates ephemeral compute nodes to execute workloads and deletes them once they become idle, as shown in the sketch below.
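
Once the cluster is running, the distinction is visible in sinfo: static nodes report a normal idle state, while ephemeral nodes that are currently powered down carry Slurm's power-save suffix (~). The output below is only a sketch, assuming a deployment named 'fluid-slurm' with 2 static nodes and a max node count of 10.

[user@login0]$ sinfo --partition=partition-1
PARTITION    AVAIL  TIMELIMIT  NODES  STATE  NODELIST
partition-1     up   infinite      2   idle  fluid-slurm-compute-000-[0000-0001]
partition-1     up   infinite      8  idle~  fluid-slurm-compute-000-[0002-0009]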

Choose the machine type, GPU type, and GPU count to meet your cluster users' workload needs. Reach out to Fluid Numerics support if you need help profiling software to determine the ideal compute partition machine specifications.

Click on "More" to reveal advanced settings.

If you'd like to use local SSDs, you can attach up to 8 local SSDs per compute node; each local SSD is 375 GB in size. Slurm-GCP mounts local SSDs via the NVMe interface and combines multiple SSDs into a single logical volume (RAID0). Set the scratch directory to the path where you want the local SSDs mounted. Slurm-GCP is configured so that all users have access to this directory during job execution.
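
As an illustration, a job script might stage data through the local SSD volume; this is only a sketch, assuming the scratch directory was set to /scratch and using a placeholder application and file names.

#!/bin/bash
#SBATCH --partition=partition-1
#SBATCH --ntasks=1
# Stage input onto the node-local SSD volume, run, then copy results back
# to the NFS-mounted home directory (paths and binary name are examples).
cp $HOME/input.dat /scratch/
cd /scratch
$HOME/my_app input.dat > results.out
cp results.out $HOME/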

You can leave the max wall time as 'INFINITE' (the default), which places no time restrictions on jobs executed in the default partition. Alternatively, you can set the max wall time in the format 'days-hours:minutes:seconds'; e.g., 1-12:00:00 specifies a wall clock limit of 1 day, 12 hours.
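
Whatever limit you choose, users can request a wall time for an individual job with Slurm's --time flag, using the same day-hour:minute:second format. For example:

[user@login0]$ srun --partition=partition-1 --time=0-00:10:00 -n1 hostname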

Check the box "Disable Hyperthreading" to disabled hyperthreaded virtual cores. When enabled, Slurm will see a core count that is half the number of vCPUs. This is an experimental feature meant for fine tuning the performance of memory-bandwidth bound HPC applications.

Check the box "Preemptible Bursting" if you'd like ephemeral compute nodes in the default partition to be marked as preemptible instances. Make sure that applications running in this partition can recover from node preemption.

5. Click Deploy!

6. Configure Slurm users and test job submission

When your deployment is ready, click the SSH button on the Deployment Manager page. This opens a terminal session to the first login node in your web browser.
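
If you prefer a local terminal, you can also reach the login node with the gcloud CLI; the instance name and zone below are examples based on the naming convention described earlier.

gcloud compute ssh fluid-slurm-login0 --zone=us-central1-a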

Once you have successfully ssh'd into the login node, add users using the cluster-services add user utility.

[user@login0]$ sudo su
[root@login0]$ cluster-services add user --name=[POSIX-USERNAME]
[root@login0]$ exit

Replace [POSIX-USERNAME] with your username. Alternatively, you can set up more complex Slurm accounting with sacctmgr.
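
A minimal sacctmgr sketch is shown below, assuming you want to group users under an account named "research"; the account and user names are examples only.

[user@login0]$ sudo sacctmgr add account research Description="Research team"
[user@login0]$ sudo sacctmgr add user alice Account=research
[user@login0]$ sacctmgr show associations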

Now, try submitting a simple job step that returns the hostname of the compute node assigned to the job. Keep in mind that you will need to specify the compute partition to submit to.

[user@login0]$ srun --partition=partition-1 -n1 hostname
fluid-slurm-compute-000-0000

If you have static compute nodes, the job should execute within 30 seconds. Jobs assigned to ephemeral compute nodes can take up to 90 seconds to begin executing, since the node must first be created.
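
While an ephemeral node is being created, the job waits in a configuring (CF) state; you can check this from another session with squeue. The output below is illustrative only.

[user@login0]$ squeue -u $USER
  JOBID  PARTITION      NAME   USER  ST   TIME  NODES  NODELIST(REASON)
      2  partition  hostname   user  CF   0:15      1  fluid-slurm-compute-000-0002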

Next Steps

Now that your cluster is up and you've verified that the default compute partition is working, you can start customizing your deployment further.

Understand IAM Roles and Permissions for Slurm-GCP

Add Compute Partitions

Add Slurm Users with cluster-services

Add Network Attached Storage with cluster-services

Serve your home directories from Filestore

Need a more custom solution?

Does the Marketplace Slurm-GCP solution not meet your needs exactly? Fluid Numerics specializes in customizing and integrating boutique solutions that fit your HPC and scientific computing needs. Contact us for a free consultation!