Cluster Maintenance Updates and News

Office Hours for 2026 Spring

For this spring, we offer drop-in office hours every Monday and Wednesday from 2:00–4:00 p.m. Stop by to meet with our student consultants and ask any questions you have about using HPC resources. Whether you’re just getting started or need help with a specific issue, feel free to bring your laptop to walk us through any problems you're facing. There's no need to create a ticket in advance; if follow-up is needed, the student consultants will open a ticket on your behalf, and you'll receive further instructions.

Consulting Hours

  • Date: Every Monday and Wednesday
  • Location: GITC 5302N
  • Time: 2:00 PM - 4:00 PM

Office Hours

We currently offer drop-in office hours every Tuesday and Friday from 2:00–4:00 p.m. Stop by to meet with our student consultants and ask any questions you have about using HPC resources. Whether you’re just getting started or need help with a specific issue, feel free to bring your laptop to walk us through any problems you're facing. There's no need to create a ticket in advance; if follow-up is needed, the student consultants will open a ticket on your behalf, and you'll receive further instructions.

Consulting Hours

  • Date: Every Tuesday and Friday
  • Location: GITC 2404
  • Time: 2:00 PM - 4:00 PM

MIG GPU Testing on Wulver Now Available

We’re excited to announce that MIG-enabled GPUs are now available on Wulver for workflow testing!

We currently have 4 GPUs configured with MIG (Multi-Instance GPU) profiles as follows:

  • 40 GB profile – 1 MIG instance per GPU
  • 20 GB profile – 1 MIG instance per GPU
  • 10 GB profile – 2 MIG instances per GPU

This effectively allows the 4 GPUs to perform as 16 GPUs of varying memory sizes. These MIG instances allow you to run multiple workloads in parallel with dedicated GPU resources, improving efficiency for smaller jobs and testing scenarios.

Who can use them?

All Wulver users are welcome to test their workflows on these new MIG profiles. Jobs submitted to the debug_gpu partition will not be charged any Service Units (SUs). Modify your batch scripts to include these directives:

#SBATCH --partition=debug_gpu
#SBATCH --qos=debug
#SBATCH --gres=gpu:a100_10g:1      # Change to 20g or 40g as needed
#SBATCH --time=59:00               # 59 minutes; the debug_gpu partition has a 12-hour walltime limit
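
Putting the directives together, a complete test script might look like the sketch below. The job name, output file, and the nvidia-smi check are illustrative additions, not required settings; substitute your own GPU-enabled workload.

```
#!/bin/bash
#SBATCH --job-name=mig_test        # illustrative job name
#SBATCH --partition=debug_gpu
#SBATCH --qos=debug
#SBATCH --gres=gpu:a100_10g:1      # change to 20g or 40g as needed
#SBATCH --time=59:00               # 59 minutes, well within the walltime limit
#SBATCH --output=mig_test_%j.out   # %j expands to the job ID

# List the MIG instance allocated to this job (device names vary by profile)
nvidia-smi -L

# Replace with your own GPU-enabled workload, e.g.:
# python my_gpu_script.py
```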

What should you do?

  • Read through the MIG documentation on the HPC website.
  • Test your GPU-enabled workflows with these MIG resources.
  • Verify that your job scripts and containers handle MIG devices correctly.
  • Share feedback on performance and any issues you encounter.

This is a testing phase, so configurations may change based on usage and feedback. For more details, check the MIG documentation.

Wulver Outage

As part of NJIT's migration from VMware (whose licensing has become very expensive) to the Nutanix Virtual Machine (VM) platform, Wulver will undergo an unplanned but required downtime starting at 8:00 AM on Friday, August 29, to migrate the critical virtual infrastructure hosting the head, login, Open OnDemand, and Slurm nodes. The cluster will be unavailable until all migration work is completed.

  • Expected duration: Up to 12 hours (work may finish sooner)
  • Reason: Migration to the Nutanix VM platform

Important Information:

  • Any jobs submitted before the outage that would not finish in time will be held in the queue and will resume after the cluster is back online. Please plan your usage and submissions accordingly.
  • There is a minor risk that queued jobs will be lost during migration. We will monitor this and inform affected users if necessary.
  • Updates will be provided if there is any change to the expected downtime window.
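
Slurm will typically hold any job whose requested walltime overlaps a maintenance window, so if you want a job to run before the migration begins, cap its walltime to fit the remaining time. For example, a job submitted Thursday evening might request (the time value is illustrative):

```
#SBATCH --time=08:00:00   # 8 hours, so the job finishes before the 8:00 AM Friday cutoff
```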

We apologize for any inconvenience and appreciate your understanding as we make this important upgrade.

Wulver Maintenance

Wulver will be out of service on Tuesday, September 9th, for OS and SLURM updates.

Maintenance Plans

  • Upgrade the OS from RHEL 8 to RHEL 9: this will resolve the glibc error users encounter when compiling the latest packages. For details, see the FAQ.
  • Upgrade SLURM version.
  • Implement new SU calculation: This will allow users to consume fewer SUs when using a single GPU instead of all 4 GPUs on a node.
  • Implement MIG.
  • Add Lochness nodes for course-related usage.
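
To illustrate why the new SU calculation matters, suppose (our simplified assumption, not the official accounting formula) that the old scheme billed any GPU job for all 4 GPUs on a node, while the new scheme bills only for the GPUs actually requested:

```python
GPUS_PER_NODE = 4  # Wulver GPU nodes have 4 GPUs each

def old_su_charge(hours: float, gpus: int) -> float:
    """Old scheme (assumed): any GPU job is billed for the whole node."""
    return hours * GPUS_PER_NODE

def new_su_charge(hours: float, gpus: int) -> float:
    """New scheme (assumed): billed only for the GPUs actually requested."""
    return hours * gpus

# A 10-hour single-GPU job: 40 SU-equivalents before, 10 after
print(old_su_charge(10, 1))  # 40
print(new_su_charge(10, 1))  # 10
```

Under this assumption, a full-node job (all 4 GPUs) costs the same as before; only partial-node jobs become cheaper.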

HPC 2025 Spring Events

ARCS HPC invites you to our upcoming events. Please register for the events you plan to attend.

Introduction to Wulver: Getting Started

Save the Date

  • Date: Jan 22nd 2025
  • Location: Virtual
  • Time: 2:30 PM - 3:30 PM

Join us for an informative webinar designed to introduce NJIT's HPC environment, Wulver. This virtual session will provide essential information about the Wulver cluster, how to get an account, and allocation details.

Registration is now closed.

Introduction to Wulver: Accessing System & Running Jobs

Save the Date

  • Date: Jan 29th 2025
  • Location: Virtual
  • Time: 2:30 PM - 3:30 PM

This HPC training event focuses on providing the fundamentals of SLURM (Simple Linux Utility for Resource Management), a workload manager. This virtual session will equip you with the essential skills needed to effectively utilize HPC resources using SLURM.

Registration is now closed.