Batch Status

Summary

Last updated: 19.08.2018, 15:01:01

57 active nodes (1 used, 56 free)

3880 cores (272 used, 3608 free)

1 running job, 1632:00:00 remaining core hours

1 waiting job, 1632:00:00 waiting core hours
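
The core-hour totals appear to be the product of a job's processor count and its requested walltime (272 procs x 6:00:00 = 1632:00:00 for both the running and the waiting job). A minimal sketch of that bookkeeping in Python, assuming hypothetical inputs rather than the page's actual backend:

```python
from datetime import timedelta

def core_hours(procs: int, walltime: timedelta) -> timedelta:
    """Core hours of a job: processor count times requested walltime."""
    return procs * walltime

def fmt(td: timedelta) -> str:
    """Render a timedelta as H:MM:SS, matching the status page."""
    total = int(td.total_seconds())
    h, rest = divmod(total, 3600)
    m, s = divmod(rest, 60)
    return f"{h}:{m:02d}:{s:02d}"

# Job 2027 waits with 272 procs and a 6:00:00 walltime request.
print(fmt(core_hours(272, timedelta(hours=6))))  # -> 1632:00:00
```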

Nodes

node   S   #cores   used by jobs
wr13       272      2026
wr14        56
wr15        64
wr16        64
wr17        64
wr18        64
wr19        64
wr43        96
wr50        64
wr51        64
wr52        64
wr53        64
wr54        64
wr55        64
wr56        64
wr57        64
wr58        64
wr59        64
wr60        64
wr61        64
wr62        64
wr63        64
wr64        64
wr65        64
wr66        64
wr67        64
wr68        64
wr69        64
wr70        64
wr71        64
wr72        64
wr73        64
wr74        64
wr75        64
wr76        64
wr77        64
wr78        64
wr79        64
wr80        64
wr81        64
wr82        64
wr83        64
wr84        64
wr85        64
wr86        64
wr87        64
wr88        64
wr89        64
wr90        64
wr91        64
wr92        64
wr93        64
wr94        64
wr95        64
wr96        64
wr97        64
wr98        64
wr99        64

Running Jobs (1)

job   queue  user      #proc  #nodes  ppn  vmem  t_remain  t_req    t_used   started   jobname   hosts
2026  wr13   rberre2m  272    1       272  60GB  4:14:26   6:00:00  1:45:34  13:15:27  job13.sh  wr13
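
The t_remain column is consistent with requested walltime minus consumed walltime (6:00:00 - 1:45:34 = 4:14:26). A small sketch of that derivation, assuming hypothetical inputs:

```python
from datetime import timedelta

def remaining(t_req: timedelta, t_used: timedelta) -> timedelta:
    """Remaining walltime of a running job: requested minus consumed."""
    return t_req - t_used

# Job 2026: requested 6:00:00, used 1:45:34 so far.
print(remaining(timedelta(hours=6),
                timedelta(hours=1, minutes=45, seconds=34)))  # -> 4:14:26
```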

Waiting/Blocked Jobs (1)

Jobs with problems are highlighted. For each highlighted job, check whether its resource request can be satisfied by any node in the queue (most probably it cannot); a sketch of such a check follows the table below.

job   queue  user      state  #proc  #nodes  ppn  vmem  t_req    prio  enqueued             waiting     jobname   est.hosts
2027  wr13   rberre2m  PD     272    1       272  60GB  6:00:00  43    17.08.2018 16:36:04  1:22:24:57  job13.sh
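
Whether a waiting request is satisfiable at all can be read off the Nodes table above: job 2027 asks for 1 node with ppn=272, and only wr13 offers 272 cores, so it can only ever start there (and wr13 is currently busy with job 2026). A minimal sketch of such a feasibility check, using a hypothetical node inventory excerpted from the table:

```python
# Hypothetical inventory: cores per node, excerpted from the Nodes table.
nodes = {"wr13": 272, "wr14": 56, "wr43": 96, "wr50": 64}

def satisfiable(ppn: int, nnodes: int, inventory: dict[str, int]) -> bool:
    """True if at least `nnodes` nodes offer `ppn` or more cores each."""
    fitting = [name for name, cores in inventory.items() if cores >= ppn]
    return len(fitting) >= nnodes

# Job 2027 requests 1 node with 272 cores: only wr13 qualifies.
print(satisfiable(ppn=272, nnodes=1, inventory=nodes))  # -> True (but wr13 is busy)
```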