Batch Status

Summary

Last updated: 06:58:01 19.01.2020

82 active nodes (28 used, 54 free)

4984 cores (1758 used, 3226 free)

20 running jobs, 126080:00:00 remaining core hours

1 waiting job, - waiting core hours
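The remaining-core-hours figure is presumably the sum, over all running jobs, of each job's core count times its remaining walltime. A minimal sketch of that calculation, assuming walltimes are formatted as HH:MM:SS or D:HH:MM:SS as in the tables below (the helper name and field layout are illustrative, not the monitor's actual code):

```python
def parse_walltime_hours(s: str) -> float:
    """Convert 'HH:MM:SS' or 'D:HH:MM:SS' to hours (assumed format)."""
    parts = [int(p) for p in s.split(":")]
    if len(parts) == 4:          # D:HH:MM:SS, e.g. "1:05:25:03"
        d, h, m, sec = parts
        h += 24 * d
    else:                        # HH:MM:SS, e.g. "2:41:33"
        h, m, sec = parts
    return h + m / 60 + sec / 3600

# (#proc, t_remain) pairs for two of the running jobs listed below
jobs = [(8, "2:41:33"), (64, "8:23:45")]
core_hours = sum(n * parse_walltime_hours(t) for n, t in jobs)
```

Summing this product over all 20 running jobs would yield the total remaining core hours shown above.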

Nodes


Running Jobs (20)

job queue user #proc #nodes ppn vmem t_remain t_req t_used started jobname hosts
272394 any kkirsc3m 8 1 8 10GB 2:41:33 10:00:00 7:18:27 18.01.2020 23:39:34 01.run wr20
271988 any agaier2m 64 1 64 120GB 8:23:45 3:00:00:00 2:15:36:15 16.01.2020 15:21:46 vae_hex wr80
272094 gpu sbhand2s 2 1 2 50GB 14:22:43 3:00:00:00 2:09:37:17 16.01.2020 21:20:44 ds_0.1.sh wr17
272161 gpu mwasil2s 62 1 62 32GB 1:05:25:03 3:00:00:00 1:18:34:57 17.01.2020 12:23:04 tf_job wr17
272093 gpu mwasil2s 64 1 64 64GB 1:09:47:46 3:00:00:00 1:14:12:14 17.01.2020 16:45:47 tf_job wr12
272311 wr13 rberre2m 272 1 272 60GB 1:10:48:45 3:00:00:00 1:13:11:15 17.01.2020 17:46:46 job13.sh wr13
272312 hpc3 bsenne2s 64 1 64 185GB 1:11:03:17 3:00:00:00 1:12:56:43 17.01.2020 18:01:18 DNMA_Dread_Analysis wr52
272314 gpu sbhand2s 2 1 2 90GB 2:03:16:55 3:00:00:00 20:43:05 18.01.2020 10:14:56 ds_0.3.sh wr18
272315 gpu sbhand2s 2 1 2 90GB 2:03:17:00 3:00:00:00 20:43:00 18.01.2020 10:15:01 ds_0.4.sh wr19
272347 hpc dgromm3m 192 3 64 96GB 2:04:18:47 3:00:00:00 19:41:13 18.01.2020 11:16:48 start_mpi.sh wr76,wr77,wr78
272348 hpc dgromm3m 192 3 64 96GB 2:04:22:13 3:00:00:00 19:37:47 18.01.2020 11:20:14 start_mpi.sh wr81,wr82,wr83
272349 hpc dgromm3m 192 3 64 96GB 2:04:23:09 3:00:00:00 19:36:51 18.01.2020 11:21:10 start_mpi.sh wr84,wr85,wr86
272350 hpc dgromm3m 192 3 64 96GB 2:04:26:37 3:00:00:00 19:33:23 18.01.2020 11:24:38 start_mpi.sh wr87,wr88,wr89
272351 hpc dgromm3m 192 3 64 96GB 2:04:27:35 3:00:00:00 19:32:25 18.01.2020 11:25:36 start_mpi.sh wr90,wr91,wr92
272352 hpc dgromm3m 192 3 64 96GB 2:04:28:29 3:00:00:00 19:31:31 18.01.2020 11:26:30 start_mpi.sh wr94,wr95,wr96
272356 gpu4 mmuthu2s 16 1 16 45GB 2:06:53:07 3:00:00:00 17:06:53 18.01.2020 13:51:08 Itr_19_gene0.sh wr15
272357 gpu4 mmuthu2s 16 1 16 45GB 2:06:53:07 3:00:00:00 17:06:53 18.01.2020 13:51:08 Itr_19_gene3.sh wr15
272358 gpu4 mmuthu2s 16 1 16 45GB 2:06:53:07 3:00:00:00 17:06:53 18.01.2020 13:51:08 Itr_19_gene2.sh wr15
272359 gpu4 mmuthu2s 16 1 16 45GB 2:06:53:07 3:00:00:00 17:06:53 18.01.2020 13:51:08 Itr_19_gene1.sh wr15
272346 gpu sbhand2s 2 1 2 100GB 2:17:14:23 3:00:00:00 6:45:37 19.01.2020 0:12:24 ds_0.2.sh wr16
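For every running job, t_used + t_remain should equal t_req. A quick sanity check of that invariant against the first row above (the to_seconds helper is an illustrative assumption; times are [D:]HH:MM:SS):

```python
def to_seconds(s: str) -> int:
    """Convert '[D:]HH:MM:SS' to seconds (assumed format)."""
    parts = [int(p) for p in s.split(":")]
    while len(parts) < 4:        # left-pad missing day/hour fields with 0
        parts.insert(0, 0)
    d, h, m, sec = parts
    return ((d * 24 + h) * 60 + m) * 60 + sec

# Job 272394: t_remain 2:41:33, t_used 7:18:27, t_req 10:00:00
assert to_seconds("2:41:33") + to_seconds("7:18:27") == to_seconds("10:00:00")
```

The same check holds for the other rows, up to the one-second granularity of the displayed times.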

Waiting/Blocked Jobs (1)

Jobs with problems are highlighted. For each highlighted job, check whether its resource request can be satisfied by any node serving that queue (most probably it cannot).

job queue user state #proc #nodes ppn vmem t_req prio enqueued waiting jobname wait reason
272364 gpu sbhand2s PD 1 1 1 100GB 3:00:00:00 7336 18.01.2020 15:48:17 15:09:44 ds_infer_knn.sh (Resources)