Batch Status

Summary

last updated: 25.01.2022 13:09:02

83 active nodes (67 used, 16 free)

5240 hw threads (3884 used, 1356 free)

29 running jobs, 114534:40:00 remaining core hours

6 waiting jobs, - waiting core hours
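
The summary figures can be cross-checked against the running-jobs table below: the 3884 used hardware threads equal the sum of the #proc column, and the 114534:40:00 core-hour figure matches the sum of #proc × t_req over all running jobs (i.e. requested rather than remaining walltime, judging by the arithmetic). A minimal Python sketch of that cross-check, assuming the rows are available as (nproc, t_req) pairs:

    # Cross-check the summary against the running-jobs table.
    # Durations use [D:]HH:MM:SS with an optional leading day count,
    # e.g. "6:00:00" is 6 hours and "1:00:50:00" is 1 day 50 minutes.

    def parse_duration(s: str) -> int:
        """Parse [D:]HH:MM:SS into seconds."""
        parts = [int(p) for p in s.split(":")]
        days = parts[0] if len(parts) == 4 else 0
        h, m, sec = parts[-3:]
        return ((days * 24 + h) * 60 + m) * 60 + sec

    def fmt_hours(seconds: int) -> str:
        """Format seconds in the H:MM:SS core-hour notation of the summary."""
        h, rest = divmod(seconds, 3600)
        m, s = divmod(rest, 60)
        return f"{h}:{m:02d}:{s:02d}"

    # First three rows as samples; feeding in all 29 rows reproduces
    # 3884 used threads and 114534:40:00 core hours.
    rows = [(2496, "6:00:00"), (2, "1:00:00:00"), (2, "1:00:50:00")]
    print(sum(nproc for nproc, _ in rows))
    print(fmt_hours(sum(nproc * parse_duration(t) for nproc, t in rows)))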

Running Jobs (29)

job queue user #proc #nodes ppn vmem_req vmem_used t_remain t_req t_used started jobname hosts
412466 hpc3 rberre2m 2496 39 64 187GB 199GB 3:59:22 6:00:00 2:00:38 11:08:23 job_xhpl.sh wr50,wr51,wr52,wr53,wr54,wr66,wr67,wr68,wr69,wr70,wr71,wr72,wr73,wr74,wr75,wr76,wr77,wr78,wr79,wr80,wr81,wr82,wr83,wr84,wr85,wr86,wr87,wr88,wr89,wr90,wr91,wr92,wr93,wr94,wr95,wr96,wr97,wr98,wr99
412458 gpu shikka2s 2 1 2 185GB 60GB 19:36:22 1:00:00:00 4:23:38 8:45:23 L3ICP5555 wr19
412454 gpu shikka2s 2 1 2 185GB 48GB 20:25:47 1:00:50:00 4:24:13 8:44:48 L3ICP191 wr12
412459 gpu shikka2s 2 1 2 185GB 75GB 20:37:13 1:00:50:00 4:12:47 8:56:14 L1ICP191 wr15
412371 hpc2 rberre2m 48 1 48 60GB 4GB 23:54:35 3:00:00:00 2:00:05:25 23.01.2022 13:03:36 job.sh wr28
412372 hpc2 rberre2m 48 1 48 60GB 4GB 23:54:35 3:00:00:00 2:00:05:25 23.01.2022 13:03:36 job.sh wr29
412370 wr13 rberre2m 272 1 272 60GB 10GB 23:54:36 3:00:00:00 2:00:05:24 23.01.2022 13:03:37 job13.sh wr13
412373 hpc2 rberre2m 48 1 48 60GB 4GB 23:55:05 3:00:00:00 2:00:04:55 23.01.2022 13:04:06 job.sh wr36
412374 hpc2 rberre2m 48 1 48 60GB 4GB 23:55:05 3:00:00:00 2:00:04:55 23.01.2022 13:04:06 job.sh wr39
412375 hpc2 rberre2m 48 1 48 60GB 4GB 23:55:05 3:00:00:00 2:00:04:55 23.01.2022 13:04:06 job.sh wr41
412376 hpc2 rberre2m 48 1 48 60GB 3GB 23:55:05 3:00:00:00 2:00:04:55 23.01.2022 13:04:06 job.sh wr32
412377 hpc2 rberre2m 48 1 48 60GB 3GB 23:55:05 3:00:00:00 2:00:04:55 23.01.2022 13:04:06 job.sh wr30
412378 hpc2 rberre2m 48 1 48 60GB 3GB 23:55:06 3:00:00:00 2:00:04:54 23.01.2022 13:04:07 job.sh wr35
412379 hpc2 rberre2m 48 1 48 60GB 3GB 23:55:06 3:00:00:00 2:00:04:54 23.01.2022 13:04:07 job.sh wr33
412380 hpc2 rberre2m 48 1 48 60GB 4GB 23:55:06 3:00:00:00 2:00:04:54 23.01.2022 13:04:07 job.sh wr37
412381 hpc2 rberre2m 48 1 48 60GB 3GB 23:55:06 3:00:00:00 2:00:04:54 23.01.2022 13:04:07 job.sh wr38
412382 hpc2 rberre2m 48 1 48 60GB 3GB 23:55:06 3:00:00:00 2:00:04:54 23.01.2022 13:04:07 job.sh wr31
412383 hpc2 rberre2m 48 1 48 60GB 4GB 23:55:06 3:00:00:00 2:00:04:54 23.01.2022 13:04:07 job.sh wr40
412384 hpc2 rberre2m 48 1 48 60GB 4GB 23:55:06 3:00:00:00 2:00:04:54 23.01.2022 13:04:07 job.sh wr42
412385 hpc2 rberre2m 48 1 48 60GB 3GB 23:55:07 3:00:00:00 2:00:04:53 23.01.2022 13:04:08 job.sh wr34
412456 gpu shikka2s 2 1 2 185GB 47GB 1:20:26:09 2:00:50:00 4:23:51 8:45:10 L3ICP155 wr17
412457 gpu shikka2s 2 1 2 185GB 47GB 1:20:26:16 2:00:50:00 4:23:44 8:45:17 L3ICP155s wr18
412455 gpu shikka2s 2 1 2 185GB 51GB 2:19:36:05 3:00:00:00 4:23:55 8:45:06 L3ICP101s wr16
412473 hpc dgromm3m 64 1 64 96GB 125GB 2:22:18:34 3:00:00:00 1:41:26 11:27:35 start_mpi.sh wr55
412474 hpc dgromm3m 64 1 64 96GB 125GB 2:22:19:34 3:00:00:00 1:40:26 11:28:35 start_mpi.sh wr56
412475 hpc dgromm3m 64 1 64 96GB 125GB 2:22:20:34 3:00:00:00 1:39:26 11:29:35 start_mpi.sh wr57
412476 hpc dgromm3m 64 1 64 96GB 125GB 2:22:21:34 3:00:00:00 1:38:26 11:30:35 start_mpi.sh wr58
412477 hpc dgromm3m 64 1 64 96GB 125GB 2:22:22:34 3:00:00:00 1:37:26 11:31:35 start_mpi.sh wr59
412478 hpc dgromm3m 64 1 64 96GB 125GB 2:22:22:34 3:00:00:00 1:37:26 11:31:35 start_mpi.sh wr60
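
Within each row the three time columns are linked: t_remain + t_used = t_req, so the table can be sanity-checked mechanically. A small sketch of that check (same duration format and parser as in the summary sketch above; sample rows copied from the table):

    # Verify the row invariant t_remain + t_used == t_req.
    def parse_duration(s: str) -> int:
        """Parse [D:]HH:MM:SS into seconds."""
        parts = [int(p) for p in s.split(":")]
        days = parts[0] if len(parts) == 4 else 0
        h, m, sec = parts[-3:]
        return ((days * 24 + h) * 60 + m) * 60 + sec

    # (t_remain, t_req, t_used) for jobs 412466, 412458, 412371
    samples = [
        ("3:59:22", "6:00:00", "2:00:38"),
        ("19:36:22", "1:00:00:00", "4:23:38"),
        ("23:54:35", "3:00:00:00", "2:00:05:25"),
    ]
    for t_remain, t_req, t_used in samples:
        assert parse_duration(t_remain) + parse_duration(t_used) == parse_duration(t_req)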

Waiting/Blocked Jobs (6)

job queue user state #proc #nodes ppn vmem t_req prio enqueued waiting jobname wait reason
412460 gpu shikka2s PD 1 1 1 185GB 3:00:00:00 7056 8:56:21 4:12:40 L1ICP101s (Resources)
412461 gpu shikka2s PD 1 1 1 185GB 2:00:50:00 7055 8:56:29 4:12:32 L1ICP155 (Priority)
412462 gpu shikka2s PD 1 1 1 185GB 2:00:50:00 7054 8:56:35 4:12:26 L1ICP155s (Priority)
412463 gpu shikka2s PD 1 1 1 185GB 1:00:00:00 7053 8:56:41 4:12:20 L1ICP5555 (Priority)
412467 hpc3 rberre2m PD 2304 36 64 187GB 6:00:00 1410 11:08:26 2:00:35 job_xhpl.sh (Resources)
412468 hpc3 rberre2m PD 2112 33 64 187GB 6:00:00 1408 11:08:38 2:00:23 job_xhpl.sh (Priority)
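
The PD state and the wait reasons follow the usual Slurm conventions: per queue, the pending job with the highest prio reports (Resources), meaning it is next in line and only waiting for nodes to free up, while the jobs behind it report (Priority). A sketch recovering that dispatch order from the table (job/queue/prio values copied from above; the Slurm interpretation is an assumption):

    # Order waiting jobs per queue by descending priority; the head job of
    # each queue is the one squeue-style output flags as (Resources).
    from itertools import groupby

    waiting = [
        (412460, "gpu", 7056), (412461, "gpu", 7055),
        (412462, "gpu", 7054), (412463, "gpu", 7053),
        (412467, "hpc3", 1410), (412468, "hpc3", 1408),
    ]
    waiting.sort(key=lambda j: (j[1], -j[2]))
    for queue, jobs in groupby(waiting, key=lambda j: j[1]):
        jobs = list(jobs)
        print(queue, "next:", jobs[0][0],
              "queued behind it:", [j[0] for j in jobs[1:]])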