There have been system slowdowns on Eagle recently due to users launching large jobs from the /home filesystem, particularly jobs that call large conda environments. Please avoid launching large jobs from /home, and consider moving your conda environments to your /projects directory before launching a multi-node job. The /home filesystem is not designed for the high level of input/output operations that the Lustre-based /projects filesystem is built for. The slowdown that results from a large job in /home can dramatically increase the job's runtime, costing AU and possibly causing timeout failures, and it can also impact other users and their jobs with system slowdowns, timeouts, and module loading errors.
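One way to keep a conda environment out of /home is to create it directly under /projects with conda's `--prefix` option. A minimal sketch, where `<project>` is a placeholder for your own allocation's directory:

```
# Create the environment under /projects instead of the default ~/.conda location.
conda create --prefix /projects/<project>/envs/myenv python=3.10

# Activate the environment by path rather than by name.
conda activate /projects/<project>/envs/myenv
```

Existing environments can also be cloned to a /projects prefix with `conda create --prefix ... --clone <envname>`.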
Announcements
Read announcements for NREL's high-performance computing (HPC) system users.
New to HPC - Get Help
Oct. 16, 2022
Support and help contact information and sites are available in this announcement.
CSC Tutorials Team and Channels
Oct. 16, 2022
Staff in the Computational Science Center host multiple tutorials and workshops on various computational science topics throughout the year, such as Visualization, Cloud, HPC, and others.
In Microsoft Teams, a “Computational Sciences Tutorials” public team was just created to be the hub for all such tutorials and workshops. Benefits to using the team include the following:
Workaround for Windows SSH Users
May 4, 2022
Some people who use Windows 10/11 computers to ssh to Eagle from a Windows command prompt, powershell, or via Visual Studio Code's SSH extension have received a new error message about a "Corrupted MAC on input" or "message authentication code incorrect." Here's how to fix this issue.
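The full fix is in the linked announcement and is not reproduced here; a common workaround for this class of error is to pin the MAC algorithm in the SSH client configuration so the problematic default is not negotiated. A hedged sketch (the host alias and MAC choice below are illustrative assumptions, not the announcement's exact instructions):

```
# ~/.ssh/config  (on Windows: C:\Users\<you>\.ssh\config)
# Force a specific MAC algorithm when connecting to Eagle.
Host eagle.hpc.nrel.gov
    MACs hmac-sha2-512
```

The same `MACs` option can be passed on the command line with `ssh -m hmac-sha2-512 <host>` for a one-off test before editing the config file.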
Slurm Fairshare Refresher
May 7, 2021
FY21 saw the introduction of the "fairshare" priority algorithm in Eagle's job scheduler, Slurm. Queue times have been high during the Q2-Q3 rush and we've received some questions, so here's a quick refresher on Fairshare and what it means in regards to job scheduling.
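For checking your own standing under fairshare, Slurm ships two reporting utilities; a short sketch of their typical use on a Slurm system (job ID is a placeholder):

```
# Show fairshare usage and the resulting fairshare factor for your associations.
sshare -u $USER

# Show the priority components (including fairshare) of a pending job.
sprio -j <jobid>
```

A lower fairshare factor from recent heavy usage translates into lower queue priority relative to accounts that have used less of their allocation.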
Elevate your work with new tracking for Advanced Computing in the NREL Publishing Tracker
March 3, 2021
There is a new question on the User Facilities & Program Areas page when you enter a publication into the Pub Tracker – “The High Performance Computing Facility was used to produce results or data used in this publication.” Please be sure to check Yes on this question for your work that made use of the HPC User Facility or other systems in the ESIF HPC Data Center. In addition, there are three new Program Areas to use to tag your publication under the Advanced Computing heading: Cloud, HPC and Visualization & Insight Center. Making use of these metadata will enable us to elevate your work through communications highlights, feature stories, and reporting to EERE.
More information about the NREL Publishing Tracker can be found by visiting the Access and Use the NREL Publishing Tracker page on the Source.
Node Use Efficiency
Aug. 21, 2019
When building batch scripts, it is advisable to first become familiar with the capabilities offered by the Eagle nodes. In particular, keep in mind the memory capacities of the nodes and the types of cores available, and be aware that running multiple tasks on each node or using job arrays may help you use your node hours more effectively. Further, request the proper node type based on your job's memory requirement, and manage processes according to the capabilities of the differing nodes. Some Slurm options that you might consider are:
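A minimal batch-script sketch combining these options; the account name, resource values, and program name are placeholders, not recommended settings:

```
#!/bin/bash
#SBATCH --account=<allocation>    # placeholder allocation handle
#SBATCH --time=01:00:00           # wall-clock limit
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4       # pack several tasks onto one node
#SBATCH --mem=80G                 # request memory matching the node type
#SBATCH --array=0-9               # job array: 10 independent tasks

# Each array element runs the same script with a different index.
srun ./my_program --input case_${SLURM_ARRAY_TASK_ID}.dat
```

Sizing `--mem` and `--ntasks-per-node` to the node's actual memory and core counts avoids reserving a whole large-memory node for a job that only needs a fraction of it.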
Last Updated Feb. 5, 2026