The GFI computing system: Difference between revisions

Revision as of 16:02, 17 August 2018

The Geophysical Institute has acquired a new computing system, cyclone.hpc.uib.no, which replaces the system skd-cyclone.klientdrift.uib.no as of Summer 2018.

Computing performance has been enhanced substantially. The new system is a Dell PowerEdge R740 server with the following characteristics:

- Intel Xeon Gold 6140M CPUs, 18 cores / 36 threads per CPU (72 threads in total), 2.3 GHz, 25 MB L3 cache

- 1.5 TB DDR4-2666 memory

- 2 NVIDIA Tesla GPUs, 12 GB memory


Here are the key facts about using the new system:

- To transition, you will need to recompile your code (if it is compiled code) for the CentOS Linux operating system. This is the same operating system as on UiB's high-performance compute system Hexagon.

- There is no queue system on the new cyclone. Users can submit jobs to the Hexagon queue from cyclone, which acts as a login node.

- Resource usage is limited to 30-50% per user. This keeps individual users from accidentally bringing down the system.

- Different software configurations can be activated using the module command.

- Access to data storage will be maintained using existing paths, such as /Data/gfi.

- The same software packages are available, plus additional ones (suggestions are welcome).

- The system is maintained by experts from UiB's HPC group, which should be an advantage for us.
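
As a sketch of the queue workflow above (a hedged illustration only: the job name, resource requests, and launch command are hypothetical placeholders, and the exact directives depend on Hexagon's scheduler configuration), a job script submitted from cyclone might look like this:

```
#!/bin/bash
#PBS -N example-job            # job name (placeholder)
#PBS -l walltime=01:00:00      # requested wall-clock time
#PBS -l mppwidth=32            # requested cores (Cray-style PBS directive)

cd "$PBS_O_WORKDIR"            # run from the submission directory
aprun -n 32 ./my_program       # launch the program on the compute nodes
```

Saved as, for example, myjob.pbs, the script would then be submitted with "qsub myjob.pbs" and its status checked with "qstat -u $USER".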
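
The module mechanism mentioned above can be sketched as follows (the module name "intel" is only an example; run "module avail" to see what is actually installed on cyclone):

```
module avail              # list available software configurations
module load intel         # activate a configuration (example name)
module list               # show currently loaded modules
module unload intel       # deactivate it again
```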