Batch Status

Summary

last updated: 05:36:01 16.12.2018

82 active nodes (17 used, 65 free)

4984 cores (1012 used, 3972 free)

18 running jobs, 54816:00:00 remaining core hours

2 waiting jobs, - waiting core hours
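
The summary's core-hour figure appears to be the sum, over all running jobs, of requested walltime (t_req) times allocated cores (#proc), rather than of the time actually remaining. A minimal sketch in Python that reproduces the 54816:00:00 figure from the running-jobs table below, assuming that interpretation:

    # Reproduce the summary core-hour figure, assuming it is the sum of
    # requested walltime (t_req) times allocated cores (#proc).
    def to_hours(t):
        """Parse '[D:]H:MM:SS' (e.g. '3:00:00:00' = 3 days) into hours."""
        parts = [int(p) for p in t.split(":")]
        if len(parts) == 4:            # D:HH:MM:SS: fold days into hours
            parts = [parts[0] * 24 + parts[1], parts[2], parts[3]]
        h, m, s = parts
        return h + m / 60 + s / 3600

    # (t_req, #proc) pairs taken from the running-jobs table
    jobs = ([("6:00:00", 272), ("3:00:00:00", 64)]
            + [("2:00:00:00", 2)] * 2
            + [("3:00:00:00", 48)] * 14)

    print(sum(to_hours(t) * n for t, n in jobs))   # 54816.0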

Running Jobs (18)

job    queue  user      #proc  #nodes  ppn  vmem  t_remain    t_req       t_used      started              jobname              hosts
38581  wr13   rberre2m  272    1       272  60GB  4:32:38     6:00:00     1:27:22     4:08:39              job13.sh             wr13
38349  hpc    dgromm3m  64     1       64   64GB  1:06:27:50  3:00:00:00  1:17:32:10  14.12.2018 12:03:51  start_mpi.sh         wr56
38621  hpc1   mmuthu2s  2      1       2    40GB  1:16:01:10  2:00:00:00  7:58:50     15.12.2018 21:37:11  train_sem_multi.sh   wr20
38622  hpc1   mmuthu2s  2      1       2    40GB  1:16:01:24  2:00:00:00  7:58:36     15.12.2018 21:37:25  train_sem_binary.sh  wr20
38626  hpc2   rberre2m  48     1       48   60GB  2:19:22:28  3:00:00:00  4:37:32     0:58:29              job.sh               wr28
38627  hpc2   rberre2m  48     1       48   60GB  2:19:22:31  3:00:00:00  4:37:29     0:58:32              job.sh               wr29
38628  hpc2   rberre2m  48     1       48   60GB  2:19:22:34  3:00:00:00  4:37:26     0:58:35              job.sh               wr30
38629  hpc2   rberre2m  48     1       48   60GB  2:19:22:34  3:00:00:00  4:37:26     0:58:35              job.sh               wr31
38630  hpc2   rberre2m  48     1       48   60GB  2:19:22:37  3:00:00:00  4:37:23     0:58:38              job.sh               wr32
38631  hpc2   rberre2m  48     1       48   60GB  2:19:22:37  3:00:00:00  4:37:23     0:58:38              job.sh               wr33
38632  hpc2   rberre2m  48     1       48   60GB  2:19:22:37  3:00:00:00  4:37:23     0:58:38              job.sh               wr34
38633  hpc2   rberre2m  48     1       48   60GB  2:19:22:40  3:00:00:00  4:37:20     0:58:41              job.sh               wr35
38634  hpc2   rberre2m  48     1       48   60GB  2:19:22:40  3:00:00:00  4:37:20     0:58:41              job.sh               wr36
38635  hpc2   rberre2m  48     1       48   60GB  2:19:22:43  3:00:00:00  4:37:17     0:58:44              job.sh               wr37
38636  hpc2   rberre2m  48     1       48   60GB  2:19:22:43  3:00:00:00  4:37:17     0:58:44              job.sh               wr38
38637  hpc2   rberre2m  48     1       48   60GB  2:19:22:46  3:00:00:00  4:37:14     0:58:47              job.sh               wr39
38638  hpc2   rberre2m  48     1       48   60GB  2:19:22:49  3:00:00:00  4:37:11     0:58:50              job.sh               wr40
38639  hpc2   rberre2m  48     1       48   60GB  2:19:22:49  3:00:00:00  4:37:11     0:58:50              job.sh               wr41
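
Time columns use H:MM:SS below one day and D:HH:MM:SS above it (so 2:19:22:28 is 2 days, 19:22:28), and in every row t_remain + t_used adds up to t_req. A small consistency check for the row of job 38626, under the same format assumption:

    # Verify t_remain + t_used == t_req for job 38626, assuming the
    # '[D:]HH:MM:SS' time format described above.
    def to_seconds(t):
        parts = [int(p) for p in t.split(":")]
        if len(parts) == 4:            # fold days into hours
            parts = [parts[0] * 24 + parts[1], parts[2], parts[3]]
        h, m, s = parts
        return (h * 60 + m) * 60 + s

    remain, used, req = "2:19:22:28", "4:37:32", "3:00:00:00"
    assert to_seconds(remain) + to_seconds(used) == to_seconds(req)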

Waiting/Blocked Jobs (2)

Jobs with problems are highlighted. For these jobs, check whether the resource request can be satisfied by any node in the queue at all (most probably it cannot); a sketch of this check follows the table.

job    queue  user      state  #proc  #nodes  ppn  vmem  t_req    prio  enqueued             waiting   jobname   est.hosts
38582  wr13   rberre2m  PD     272    1       272  60GB  6:00:00  220   15.12.2018 10:04:30  19:31:31  job13.sh
38583  wr13   rberre2m  PD     272    1       272  60GB  6:00:00  220   15.12.2018 10:04:31  19:31:30  job13.sh
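
Both waiting jobs are in state PD (pending) and ask for 272 cores on a single node (ppn 272), a request only a correspondingly large node can ever satisfy. A minimal sketch of the per-node feasibility check suggested above; the core counts here are hypothetical placeholders, since this page does not list per-node sizes:

    # Can a single-node request for `ppn` cores ever be satisfied by any
    # node in the queue? Core counts are hypothetical placeholders, not
    # taken from this page.
    cores_per_node = {"wr14": 48, "wr20": 64, "wr28": 48}   # hypothetical

    def satisfiable(ppn, nodes):
        return any(cores >= ppn for cores in nodes.values())

    print(satisfiable(272, cores_per_node))   # False: the request must wait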