Batch Status

Summary

Last updated: 18.06.2018 03:49:01

38 active nodes (35 used, 3 free)

1612 cores (1368 used, 244 free)

27 running jobs, 33080:24:00 remaining core hours

57 waiting jobs, 18286:40:00 waiting core hours
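
The two core-hour totals above appear to be the per-job products #proc * t_req summed over the running and waiting job tables below. A minimal Python sketch of that aggregation, assuming the H:MM:SS and D:HH:MM:SS time formats used in those tables (function names are illustrative only):

    def to_seconds(t):
        """Parse 'H:MM:SS' or 'D:HH:MM:SS' (as used in the t_req column) into seconds."""
        parts = [int(p) for p in t.split(":")]
        if len(parts) == 4:                        # D:HH:MM:SS
            d, h, m, s = parts
            return ((d * 24 + h) * 60 + m) * 60 + s
        h, m, s = parts                            # H:MM:SS
        return (h * 60 + m) * 60 + s

    def core_hours(jobs):
        """jobs: iterable of (nproc, t_req) pairs taken from a job table."""
        total = sum(n * to_seconds(t) for n, t in jobs)
        h, rest = divmod(total, 3600)
        m, s = divmod(rest, 60)
        return f"{h}:{m:02d}:{s:02d}"

    # Example with two waiting jobs from the table below (6913 and 5758):
    print(core_hours([(64, "2:00:00"), (96, "2:00:00:00")]))   # 4736:00:00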

Nodes

node  #cores  used by jobs
wr3      272  6639
wr4       96
wr5       56  5739
wr6       12
wr7        8  6635
wr8       48  6981
wr10      16  6978
wr11      16  6924
wr12      16  6878
wr13      16  6894
wr14      16  6893
wr15      16  6923
wr16      16  6925
wr17      16  6891
wr19      16  6979
wr20      32  6621
wr21      32  6623
wr22      32  6622
wr23      32
wr24      32  6912
wr25      32  6912
wr26      32  6869
wr27      32  6869
wr28      48  6930
wr29      48  6930
wr30      48  6416
wr31      48  6415
wr32      48  6933
wr33      48  6933
wr34      48  6931
wr35      48  6931
wr36      48  5757
wr37      48  6441
wr38      48  5757
wr39      48  5757
wr40      48  5757
wr41      48  5738
wr42      48  6980

Running Jobs (27)

job queue user #proc #nodes ppn vmem t_remain t_req t_used started jobname hosts
6930 hpc2 mweie12s 96 2 48 6GB 32:19 2:00:00 1:26:32 2:21:20 test_GA wr28 wr29
6912 hpc1 mweie12s 64 2 32 6GB 50:15 2:00:00 1:09:05 2:39:16 test_AF wr24 wr25
6931 hpc2 mweie12s 96 2 48 7GB 55:46 2:00:00 1:03:44 2:44:47 test_GA wr34 wr35
6933 hpc2 mweie12s 96 2 48 8GB 57:08 2:00:00 1:02:00 2:46:09 test_GA wr32 wr33
6635 wr7 rberre2m 8 1 8 2GB 1:01:34 6:05:00 5:03:03 17.06.2018 22:45:35 job7.sh wr7
6639 wr3 rberre2m 272 1 272 7GB 1:03:41 6:05:00 5:00:27 17.06.2018 22:47:42 job3.sh wr3
6878 mpi rberre2m 16 1 16 2GB 1:27:39 6:01:00 4:32:55 17.06.2018 23:15:40 job.sh wr12
6980 hpc rberre2m 0 1 1 4GB 2:16:00 4:01:00 1:44:22 2:04:01 job42.sh wr42
6891 mpi rberre2m 16 1 16 2GB 2:37:28 6:01:00 3:23:27 0:25:29 job.sh wr17
6893 mpi rberre2m 16 1 16 2GB 2:37:40 6:01:00 3:22:26 0:25:41 job.sh wr14
6894 mpi rberre2m 16 1 16 2GB 2:38:14 6:01:00 3:22:26 0:26:15 job.sh wr13
6923 mpi rberre2m 16 1 16 3GB 2:55:13 6:01:00 3:05:10 0:43:14 job.sh wr15
6924 mpi rberre2m 16 1 16 3GB 2:55:13 6:01:00 3:04:41 0:43:14 job.sh wr11
6925 mpi rberre2m 16 1 16 2GB 2:55:17 6:01:00 3:04:42 0:43:18 job.sh wr16
6978 mpi rberre2m 16 1 16 2GB 3:10:58 6:01:00 2:48:57 0:58:59 job.sh wr10
6979 mpi rberre2m 16 1 16 3GB 3:56:47 6:01:00 2:03:13 1:44:48 job.sh wr19
6981 wr8 rberre2m 48 1 48 6GB 4:28:30 6:05:00 1:36:08 2:12:31 job8.sh wr8
6869 hpc1 rsharm2s 64 2 32 62GB 5:45:45 11:10:00 5:23:24 17.06.2018 22:24:46 hanuman.sh wr26 wr27
5738 default dgromm3m 48 1 48 60GB 8:46:50 3:00:00:00 2:15:13:03 15.06.2018 12:35:51 start.sh wr41
5739 default dgromm3m 48 1 48 77GB 12:50:51 3:00:00:00 2:11:08:11 15.06.2018 16:39:52 start.sh wr5
6415 hpc2 dgromm3m 48 1 48 45GB 1:16:28:21 3:00:00:00 1:07:30:47 16.06.2018 20:17:22 start.sh wr31
6416 default dgromm3m 48 1 48 44GB 1:19:06:54 3:00:00:00 1:04:52:12 16.06.2018 22:55:55 start.sh wr30
5757 hpc2 lproch3m 96 4 24 22GB 1:20:43:22 2:00:00:00 3:15:47 0:32:23 w_519_init wr36 wr38 wr39 wr40
6441 default dgromm3m 48 1 48 47GB 1:23:51:08 3:00:00:00 1:00:08:02 17.06.2018 3:40:09 start.sh wr37
6621 hpc kkuell3m 32 1 32 5GB 2:08:17:32 3:00:00:00 15:42:12 17.06.2018 12:06:33 wopt_gen_martys_gAds0.360000.sh wr20
6622 hpc kkuell3m 32 1 32 5GB 2:08:17:36 3:00:00:00 15:42:07 17.06.2018 12:06:37 wopt_gen_martys_gAds0.405000.sh wr22
6623 hpc kkuell3m 32 1 32 5GB 2:08:17:46 3:00:00:00 15:42:09 17.06.2018 12:06:47 wopt_gen_martys_gAds0.450000.sh wr21

Waiting/Blocked Jobs (57)

Jobs with problems are highlighted. For each of these jobs, check whether the resource request can actually be satisfied by the nodes available in its queue (most probably it cannot).
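
A minimal sketch of that check in Python, using the per-node core counts from the Nodes table above. Per-node memory is not listed in this snapshot, so only the cores-per-node (ppn) part of the request is checked, and the names used here are illustrative only:

    # Core counts per node, copied from the Nodes table; the commented line
    # summarizes the remaining nodes.
    NODE_CORES = {
        "wr3": 272, "wr4": 96, "wr5": 56, "wr6": 12, "wr7": 8, "wr8": 48,
        # wr10-wr17 and wr19: 16 cores each; wr20-wr27: 32; wr28-wr42: 48
    }

    def can_ever_run(ppn, node_cores):
        # True if at least one node offers ppn or more cores, i.e. the per-node
        # part of the request could be satisfied once such a node is free.
        return any(cores >= ppn for cores in node_cores.values())

    # Job 6640 below asks for 272 cores on a single node: only wr3 qualifies.
    print(can_ever_run(272, NODE_CORES))   # True
    print(can_ever_run(300, NODE_CORES))   # False: no node has that many cores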

job queue user state #proc #nodes ppn vmem t_req prio enqueued waiting jobname est.hosts
6913 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7516 0:39:08 3:09:53 test_AF wr24 wr25
6914 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7516 0:39:08 3:09:53 test_AF wr24 wr25
6916 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7516 0:39:08 3:09:53 test_AF wr26 wr27
6917 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7516 0:39:08 3:09:53 test_AF wr24 wr25
6918 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7516 0:39:08 3:09:53 test_AF wr26 wr27
6919 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7516 0:39:08 3:09:53 test_AF wr24 wr25
6920 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7516 0:39:08 3:09:53 test_AF wr26 wr27
6921 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7516 0:39:08 3:09:53 test_AF wr24 wr25
6922 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7516 0:39:08 3:09:53 test_AF wr26 wr27
6915 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7516 0:39:08 3:09:53 test_AF wr24 wr25
6934 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:15 3:04:46 test_GA wr28 wr29
6935 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:15 3:04:46 test_GA wr34 wr35
6936 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:15 3:04:46 test_GA wr32 wr33
6937 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:15 3:04:46 test_GA wr28 wr29
6939 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:20 3:04:41 test_HV wr34 wr35
6940 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:20 3:04:41 test_HV wr32 wr33
6941 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:20 3:04:41 test_HV wr28 wr29
6942 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:20 3:04:41 test_HV wr34 wr35
6943 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:20 3:04:41 test_HV wr32 wr33
6944 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:20 3:04:41 test_HV wr28 wr29
6945 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:20 3:04:41 test_HV wr34 wr35
6946 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:20 3:04:41 test_HV wr32 wr33
6948 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:20 3:04:41 test_HV wr34 wr35
6949 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:20 3:04:41 test_HV wr32 wr33
6950 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:20 3:04:41 test_HV wr28 wr29
6951 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:23 3:04:38 test_NL wr34 wr35
6952 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:23 3:04:38 test_NL wr32 wr33
6954 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:23 3:04:38 test_NL wr34 wr35
6955 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:23 3:04:38 test_NL wr32 wr33
6956 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:23 3:04:38 test_NL wr28 wr29
6957 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:23 3:04:38 test_NL wr34 wr35
6958 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:23 3:04:38 test_NL wr32 wr33
6959 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:23 3:04:38 test_NL wr28 wr29
6960 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:23 3:04:38 test_NL wr34 wr35
6961 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:24 3:04:37 test_NL wr32 wr33
6963 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7510 0:44:27 3:04:34 test_RM wr24 wr25
6964 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7510 0:44:27 3:04:34 test_RM wr26 wr27
6966 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7510 0:44:27 3:04:34 test_RM wr26 wr27
6967 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7510 0:44:27 3:04:34 test_RM wr24 wr25
6968 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7510 0:44:27 3:04:34 test_RM wr26 wr27
6969 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7510 0:44:27 3:04:34 test_RM wr24 wr25
6970 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7510 0:44:27 3:04:34 test_RM wr26 wr27
6962 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:24 3:04:37 test_NL wr28 wr29
6972 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7510 0:44:27 3:04:34 test_RM wr26 wr27
6973 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7510 0:44:27 3:04:34 test_RM wr24 wr25
6974 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7510 0:44:27 3:04:34 test_RM wr26 wr27
6965 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7510 0:44:27 3:04:34 test_RM wr24 wr25
6947 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:20 3:04:41 test_HV wr28 wr29
6971 hpc1 mweie12s Q 64 2 32 100GB 2:00:00 7510 0:44:27 3:04:34 test_RM wr24 wr25
6953 hpc2 mweie12s Q 96 2 48 100GB 2:00:00 7510 0:44:23 3:04:38 test_NL wr28 wr29
6636 wr7 rberre2m Q 8 1 8 6GB 6:05:00 2250 17.06.2018 12:30:26 15:18:35 job7.sh wr7
6637 wr7 rberre2m Q 8 1 8 6GB 6:05:00 2250 17.06.2018 12:30:26 15:18:35 job7.sh wr7
6640 wr3 rberre2m Q 272 1 272 60GB 6:05:00 2245 17.06.2018 12:30:29 15:18:32 job3.sh wr3
6641 wr3 rberre2m Q 272 1 272 60GB 6:05:00 2245 17.06.2018 12:30:30 15:18:31 job3.sh wr3
6066 wr7 amalli2s Q 16 1 16 8GB 2:17:00:00 0 15.06.2018 15:48:11 2:12:00:50 my-first-shell.sh
6977 wr7 amalli2s Q 16 1 16 8GB 2:17:00:00 0 0:56:48 2:52:13 my-first-shell.sh
5758 hpc2 lproch3m H 96 4 24 120GB 2:00:00:00 0 14.06.2018 16:28:37 3:11:20:24 w_520_init