Batch Status

Summary

Last updated: 15.10.2018 20:02:01

81 active nodes (25 used, 56 free)

4920 cores (1464 used, 3456 free)

22 running jobs, 92832:00:00 remaining core hours

3 waiting jobs, - waiting core hours
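
The used-core count is the sum of the #proc column over the running jobs below, and the figure reported as remaining core hours matches the sum of #proc × t_req (the requested, not the elapsed-adjusted, core time). A minimal sketch of that arithmetic in Python, with the 22 running jobs collapsed into their four (#proc, t_req) groups as read from the table:

    # Recompute the summary totals from the running-jobs table.
    # Each group is (#proc, t_req in hours, number of jobs).
    groups = [
        (56, 12, 1),    # job 19365
        (48, 72, 8),    # jobs 19409 and 19416-19422
        (64, 72, 12),   # the remaining single-node 64-core jobs
        (256, 36, 1),   # job 19488 (4 nodes x 64 ppn)
    ]

    used_cores = sum(p * n for p, _, n in groups)
    core_hours = sum(p * h * n for p, h, n in groups)

    print(used_cores)  # 1464, as reported above
    print(core_hours)  # 92832, displayed above as 92832:00:00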


Running Jobs (22)

All times are given as [days:]hh:mm:ss; ppn = processors per node; a started entry without a date refers to the current day (15.10.2018).

job queue user #proc #nodes ppn vmem t_remain t_req t_used started jobname hosts
19365 wr14 mschen3m 56 1 56 120GB 7:25:45 12:00:00 4:34:15 15:27:46 octa_8192 wr14
19392 hpc dgromm3m 64 1 64 96GB 16:22:21 3:00:00:00 2:07:37:39 13.10.2018 12:24:22 start_mpi.sh wr53
19393 hpc dgromm3m 64 1 64 96GB 16:25:19 3:00:00:00 2:07:34:41 13.10.2018 12:27:20 start_mpi.sh wr54
19409 hpc2 rberre2m 48 1 48 60GB 18:01:17 3:00:00:00 2:05:58:43 13.10.2018 14:03:18 job.sh wr29
19488 hpc lproch3m 256 4 64 120GB 1:02:14:31 1:12:00:00 9:45:29 10:16:32 510_cp_restart wr51,wr52,wr56,wr59
19416 hpc2 rberre2m 48 1 48 60GB 1:12:31:39 3:00:00:00 1:11:28:21 14.10.2018 8:33:40 job.sh wr28
19417 hpc2 rberre2m 48 1 48 60GB 1:12:32:04 3:00:00:00 1:11:27:56 14.10.2018 8:34:05 job.sh wr30
19418 hpc2 rberre2m 48 1 48 60GB 1:12:32:04 3:00:00:00 1:11:27:56 14.10.2018 8:34:05 job.sh wr31
19419 hpc2 rberre2m 48 1 48 60GB 1:12:32:07 3:00:00:00 1:11:27:53 14.10.2018 8:34:08 job.sh wr32
19420 hpc2 rberre2m 48 1 48 60GB 1:12:32:07 3:00:00:00 1:11:27:53 14.10.2018 8:34:08 job.sh wr33
19421 hpc2 rberre2m 48 1 48 60GB 1:12:32:07 3:00:00:00 1:11:27:53 14.10.2018 8:34:08 job.sh wr34
19422 hpc2 rberre2m 48 1 48 60GB 1:12:32:10 3:00:00:00 1:11:27:50 14.10.2018 8:34:11 job.sh wr35
19460 hpc dgromm3m 64 1 64 96GB 1:23:52:07 3:00:00:00 1:00:07:53 14.10.2018 19:54:08 start_mpi.sh wr55
19461 hpc dgromm3m 64 1 64 96GB 1:23:54:06 3:00:00:00 1:00:05:54 14.10.2018 19:56:07 start_mpi.sh wr64
19476 hpc dgromm3m 64 1 64 96GB 2:12:16:52 3:00:00:00 11:43:08 8:18:53 start_mpi.sh wr50
19483 hpc dgromm3m 64 1 64 96GB 2:12:44:30 3:00:00:00 11:15:30 8:46:31 start_mpi.sh wr60
19484 hpc dgromm3m 64 1 64 96GB 2:12:44:49 3:00:00:00 11:15:11 8:46:50 start_mpi.sh wr61
19485 hpc dgromm3m 64 1 64 96GB 2:12:45:08 3:00:00:00 11:14:52 8:47:09 start_mpi.sh wr62
19486 hpc dgromm3m 64 1 64 96GB 2:12:54:35 3:00:00:00 11:05:25 8:56:36 start_mpi.sh wr57
19487 hpc dgromm3m 64 1 64 96GB 2:12:57:02 3:00:00:00 11:02:58 8:59:03 start_mpi.sh wr58
19554 hpc3 koedderm 64 1 64 185GB 2:21:43:25 3:00:00:00 2:16:35 17:45:26 gmx_test_hpc3 wr70
19555 hpc3 koedderm 64 1 64 185GB 2:22:16:25 3:00:00:00 1:43:35 18:18:26 gmx_test_hpc3 wr63
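
For every running job the time columns satisfy t_used + t_remain = t_req, and t_used equals the time elapsed since started at the last-updated timestamp. A small Python sketch that parses the [days:]hh:mm:ss durations and checks that invariant against the first row (job 19365):

    from datetime import timedelta

    def parse_duration(s: str) -> timedelta:
        # Durations are [days:]hh:mm:ss, e.g. "3:00:00:00" is 3 days.
        parts = [int(p) for p in s.split(":")]
        days = parts[0] if len(parts) == 4 else 0
        hours, minutes, seconds = parts[-3:]
        return timedelta(days=days, hours=hours, minutes=minutes, seconds=seconds)

    # Job 19365: 4:34:15 used + 7:25:45 remaining = 12:00:00 requested.
    assert (parse_duration("4:34:15") + parse_duration("7:25:45")
            == parse_duration("12:00:00"))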

Waiting/Blocked Jobs (3)

Jobs with problems are highlighted. For each highlighted job, check whether its resource request can be satisfied by any node in the queue at all (most likely it cannot); a sketch of such a check follows the table below.

State PD means pending; an empty est.hosts column means no host estimate is available.

job queue user state #proc #nodes ppn vmem t_req prio enqueued waiting jobname est.hosts
17352 any jlewer3s PD 8 1 8 16GB 3:08:00:00 440 09.10.2018 18:21:56 6:01:40:05 abaqus_slurm.sh
19550 wr14 mschen3m PD 1 1 1 120GB 12:00:00 25 13:34:08 6:27:53 Amber16_Benchmarks_wr14
19551 wr14 mschen3m PD 1 1 1 120GB 12:00:00 25 13:34:19 6:27:42 Amber16_Benchmarks_wr14
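
The waiting column is the time since enqueued as of the last-updated timestamp. Whether a pending job can start at all depends on some node offering enough cores and memory. Below is a minimal feasibility check in Python; the node capacities are only inferred from jobs already running on those hosts and stand in for the cluster's real per-node limits, which this listing does not include:

    # Per-node capacities inferred from running jobs (placeholders for
    # the real limits, which are not part of this listing).
    nodes = {
        "wr14": {"cores": 56, "mem_gb": 120},
        "wr28": {"cores": 48, "mem_gb": 60},
    }

    def fits_somewhere(ppn: int, vmem_gb: int) -> bool:
        # Single-node check: at least one node must offer enough
        # processors per node and enough memory for the request.
        return any(n["cores"] >= ppn and n["mem_gb"] >= vmem_gb
                   for n in nodes.values())

    # Job 19550: ppn 1, vmem 120GB on the wr14 queue.
    print(fits_somewhere(ppn=1, vmem_gb=120))  # True with these capacities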