Batch Status

Summary

Last updated: 22:24:01, 12.12.2017

38 active nodes (32 used, 6 free)

1612 cores (620 used, 992 free)

36 running jobs, 15798:24:00 remaining core hours

2 waiting jobs, 2304:00:00 waiting core hours
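
The waiting core hours match the sum of #proc x t_req over the two waiting jobs listed below (32 x 72:00:00 = 2304:00:00; job 73 requests 0 cores and contributes nothing); the remaining core hours are presumably the analogous aggregate over the running jobs. A minimal sketch of that aggregation, assuming a [d:]hh:mm:ss time format; parse_dhms and the job dicts are illustrative helpers, not part of the batch system:

    # Sketch: core-hour aggregation as presumably used in the summary
    # (assumption: core hours = #proc x walltime, summed over jobs).
    from datetime import timedelta

    def parse_dhms(s):
        # "d:hh:mm:ss", "h:mm:ss" or "mm:ss" -> timedelta (assumed format)
        parts = [int(p) for p in s.split(":")]
        while len(parts) < 4:
            parts.insert(0, 0)
        d, h, m, sec = parts
        return timedelta(days=d, hours=h, minutes=m, seconds=sec)

    def core_hours(jobs, key):
        total = sum((j["proc"] * parse_dhms(j[key]) for j in jobs),
                    timedelta())
        secs = int(total.total_seconds())
        return f"{secs // 3600}:{secs % 3600 // 60:02d}:{secs % 60:02d}"

    waiting = [{"proc": 32, "t_req": "3:00:00:00"},   # job 531
               {"proc": 0,  "t_req": "4:01:00"}]      # job 73
    print(core_hours(waiting, "t_req"))               # -> 2304:00:00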

Nodes

node  #cores  used by jobs
wr3      272
wr4       96
wr5       56
wr6       12
wr7        8
wr8       48  545
wr10      16  542
wr11      16  540
wr12      16  548
wr13      16  547
wr14      16  546
wr15      16  544
wr16      16  541
wr17      16  543
wr19      16  537
wr20      32  516
wr21      32  414,415
wr22      32  411,413
wr23      32  409,410
wr24      32  407,408
wr25      32  346
wr26      32  345
wr27      32  344
wr28      48  343
wr29      48  26660
wr30      48  342
wr31      48  26659
wr32      48  341
wr33      48  340
wr34      48  339
wr35      48  338
wr36      48  337
wr37      48  336
wr38      48  335
wr39      48
wr40      48  26661
wr41      48  539
wr42      48  44
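
The node counts in the summary can be recomputed from this table. A minimal sketch, assuming a node counts as used exactly when at least one job id is listed for it; fed the table text as a string, it returns (32, 6, 1612), matching the summary:

    # Recount used/free nodes and total cores from the node table.
    def summarize_nodes(table: str):
        used = free = total_cores = 0
        for line in table.strip().splitlines()[1:]:   # skip header row
            fields = line.split()
            total_cores += int(fields[1])             # 2nd column: #cores
            if len(fields) > 2:                       # job id(s) listed
                used += 1
            else:
                free += 1
        return used, free, total_cores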

Running Jobs (36)

  job  queue    user      #proc  #nodes  ppn   vmem    t_remain       t_req      t_used  started              jobname       hosts
  537  mpi      rberre2m     16       1   16    3GB        3:16     3:01:00     2:57:33  19:26:17             job.sh        wr19
  545  wr8      rberre2m     48       1   48    6GB     1:02:18     2:05:00     1:02:09  21:21:19             job8.sh       wr8
  540  mpi      rberre2m     16       1   16    2GB     1:08:27     3:01:00     1:51:41  20:31:28             job.sh        wr11
  541  mpi      rberre2m     16       1   16    2GB     1:14:23     3:01:00     1:45:42  20:37:24             job.sh        wr16
  542  mpi      rberre2m     16       1   16    2GB     1:17:46     3:01:00     1:42:33  20:40:47             job.sh        wr10
  543  mpi      rberre2m     16       1   16    2GB     1:44:13     3:01:00     1:15:52  21:07:14             job.sh        wr17
  544  mpi      rberre2m     16       1   16    2GB     1:48:27     3:01:00     1:11:56  21:11:28             job.sh        wr15
  546  mpi      rberre2m     16       1   16    2GB     2:08:46     3:01:00       51:40  21:31:47             job.sh        wr14
  547  mpi      rberre2m     16       1   16    2GB     2:17:11     3:01:00       42:53  21:40:12             job.sh        wr13
  548  mpi      rberre2m     16       1   16    2GB     2:30:16     3:01:00       29:57  21:53:17             job.sh        wr12
  539  hpc      rberre2m      0       1    1    2GB     3:11:29     4:01:00       49:06  21:34:30             job41.sh      wr41
  335  default  kkirsc3m      8       1    8  113GB     3:49:55    18:00:00    14:09:20  8:13:56              01            wr38
  336  default  kkirsc3m      8       1    8  113GB     3:49:55    18:00:00    14:09:38  8:13:56              02            wr37
  337  default  kkirsc3m      8       1    8  113GB     3:49:55    18:00:00    14:09:17  8:13:56              03            wr36
  338  default  kkirsc3m      8       1    8  113GB     3:49:55    18:00:00    14:09:30  8:13:56              04            wr35
  339  default  kkirsc3m      8       1    8  113GB     3:49:55    18:00:00    14:09:14  8:13:56              05            wr34
  340  default  kkirsc3m      8       1    8  113GB     3:49:55    18:00:00    14:09:20  8:13:56              06            wr33
  341  default  kkirsc3m      8       1    8  113GB     3:49:55    18:00:00    14:09:22  8:13:56              07            wr32
  342  default  kkirsc3m      8       1    8  113GB     3:49:55    18:00:00    14:09:37  8:13:56              08            wr30
  343  default  kkirsc3m      8       1    8  113GB     3:49:55    18:00:00    14:09:32  8:13:56              09            wr28
  344  default  kkirsc3m      8       1    8  113GB     3:49:55    18:00:00    14:09:33  8:13:56              10            wr27
  345  default  kkirsc3m      8       1    8  113GB     3:49:55    18:00:00    14:09:11  8:13:56              11            wr26
  346  default  kkirsc3m      8       1    8  113GB     3:49:55    18:00:00    14:09:33  8:13:56              12            wr25
26659  hpc2     dgromm3m     48       1   48   29GB    11:18:17  2:00:00:00  1:12:40:51  11.12.2017 9:42:18   start.sh      wr31
26660  hpc2     dgromm3m     48       1   48   29GB    11:19:36  2:00:00:00  1:12:39:34  11.12.2017 9:43:37   start.sh      wr29
26661  hpc2     dgromm3m     48       1   48   29GB    12:05:31  2:00:00:00  1:11:53:58  11.12.2017 10:29:32  start.sh      wr40
  516  hpc      agaier2m      8       1    8   20GB    21:47:40  1:00:00:00     2:11:43  20:11:41             SA_Neat       wr20
  407  default  tjandt2s     16       1   16   27GB  1:16:04:17  2:00:00:00     7:55:14  14:28:18             eff           wr24
  408  default  tjandt2s     16       1   16   27GB  1:16:04:19  2:00:00:00     7:55:12  14:28:20             eff           wr24
  409  default  tjandt2s     16       1   16   27GB  1:16:04:19  2:00:00:00     7:55:22  14:28:20             eff           wr23
  410  default  tjandt2s     16       1   16   27GB  1:16:04:19  2:00:00:00     7:55:22  14:28:20             eff           wr23
  411  default  tjandt2s     16       1   16   27GB  1:16:04:26  2:00:00:00     7:54:24  14:28:27             no_eff        wr22
  413  default  tjandt2s     16       1   16   27GB  1:16:04:27  2:00:00:00     7:54:23  14:28:28             no_eff        wr22
  414  default  tjandt2s     16       1   16   27GB  1:16:04:30  2:00:00:00     7:54:19  14:28:31             no_eff        wr21
  415  default  tjandt2s     16       1   16   27GB  1:16:04:31  2:00:00:00     7:54:18  14:28:32             no_eff        wr21
   44  hpc      coligs5m      4       1    4    3GB  1:16:15:48  3:00:00:00  1:07:43:28  11.12.2017 14:39:49  jobscript.sh  wr42
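
The time columns are related as t_remain = t_req minus the wall-clock time elapsed since the job started, measured against the report's timestamp; t_used is accounted separately. Job 537 checks out: 3:01:00 - (22:24:01 - 19:26:17) = 3:16. A short verification in code, with the values taken from the report:

    # Verify t_remain = t_req - (last update - start) for job 537.
    from datetime import datetime, timedelta

    last_update = datetime(2017, 12, 12, 22, 24, 1)
    started     = datetime(2017, 12, 12, 19, 26, 17)   # job 537
    t_req       = timedelta(hours=3, minutes=1)        # 3:01:00

    print(t_req - (last_update - started))   # 0:03:16, listed as "3:16"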

Waiting/Blocked Jobs (2)

Jobs with problems are highlighted. For each highlighted job, check whether your resource request can be satisfied by any node serving its queue (most probably it cannot!). A sketch of such a check follows the table below.

job  queue  user      state  #proc  #nodes  ppn   vmem       t_req  prio  enqueued                waiting  jobname   est.hosts
531  hpc1   marco     Q         32       1   32  120GB  3:00:00:00  7186  18:31:10                3:52:51  PSA       wr27
 73  hpc    rberre2m  Q          0       1    1  100GB     4:01:00  3305  11.12.2017 14:39:48  1:07:44:13  job42.sh  wr42
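
A minimal sketch of the suggested satisfiability check, assuming the relevant per-node limits are core count and memory. Node memory is not listed in this report, so the 64GB figure below is a placeholder assumption, not the real capacity of wr27; the core count is from the node table above:

    # Can any candidate node satisfy job 531's per-node request?
    nodes = {"wr27": {"cores": 32, "mem_gb": 64}}      # mem_gb: assumed

    def fits(job, node):
        return (job["ppn"] <= node["cores"]
                and job["vmem_gb"] <= node["mem_gb"])

    job_531 = {"ppn": 32, "vmem_gb": 120}
    print(any(fits(job_531, n) for n in nodes.values()))   # False

Under the assumed 64GB, job 531's 120GB vmem request cannot be placed on any listed node, which would be consistent with the warning above.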