'Dan Skomsky' poodles511@sbcglobal.net [hercules-390]
2017-01-31 00:41:26 UTC
After all the discussion of z/OS benchmarks, I decided to see how a simple
change to the .CNF file would affect my Hyperion stress test numbers. Per
the information previously pointed out by Fish, I increased my DEVTMAX value
from 8 to 16, giving me eight more device threads. As before, I ran my ten
(10) full NBENCH Assemble, Link, and Go JOBS. Absolutely nothing else has
changed, nothing whatsoever. The average wall clock time per JOB is now
6.93 minutes. That's a decrease of 1.40 minutes, or 16.80%, which is a
significant number. Here are my HERCGUI log numbers:
17:02:26.928 00000D8C HHC02272I From Sun Jan 29 17:02:26 2017 to Mon Jan 30
17:02:26 2017
17:02:26.928 00000D8C HHC02272I MIPS: 910.316468
17:02:26.928 00000D8C HHC02272I IO/s: 2900
17:02:26.928 00000D8C HHC02272I Current interval is 1440 minutes
The MIPS number went up by 98.168611, or 12.09%. Again, that's significant.
But I/Os per second decreased by 336, or 10.38%.
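For reference, here is how those figures fall out against the September run
quoted further down (8.33 minutes per JOB, 812.147857 MIPS, 3236 I/O per
second):

  Wall clock: 8.33 - 6.93 = 1.40 minutes per JOB;  1.40 / 8.33 = 16.8%
  MIPS:       910.316468 - 812.147857 = 98.168611; 98.168611 / 812.147857 = 12.09%
  I/O rate:   3236 - 2900 = 336 fewer per second;  336 / 3236 = 10.38%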
Again, absolutely nothing changed in my stress test except for doubling the
DEVTMAX value. Who would have thought such a small change would have
produced such a significant performance increase?
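For anyone who wants to try the same thing, the change amounts to a single
statement in the Hercules configuration (.CNF) file. The excerpt below is
only a sketch; everything else in the file stays exactly as it was:

  # Hercules configuration (.CNF) excerpt -- illustrative only
  # DEVTMAX sets the maximum number of device-handler threads
  # (this system previously ran with DEVTMAX 8)
  DEVTMAX  16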
In the near future I will be updating the operating system to Windows Server
2012 R2. When everything is stable once more, I'll re-run my stress test.
From: hercules-***@yahoogroups.com [mailto:hercules-***@yahoogroups.com]
Sent: Sunday, September 04, 2016 12:55 PM
To: hercules-***@yahoogroups.com
Subject: RE: [hercules-390] Stress Testing Hyperion under Windows Server.
Well, I updated the BIOS and chipset microcode on my Dell box to the latest
and greatest levels. I then continued with my stress testing. This time I
submitted ten copies of NBENCH (with unique JOB names of course). Eight
immediately jumped into my free initiators and ran to completion. As the
first two NBENCH jobs completed, the remaining two pending jobs jumped into
the free initiators. All NBENCH jobs were full Assemble, link, and go
examples. For those who aren't familiar with full NBENCH, each JOB consists
of 35,000+ lines of Assembler source, executes 43 program steps, and
produces 72,000+ lines of SYSOUT. The average wall clock time per JOB was
8.33 minutes.
I decided this method of testing would maximize the mix of I/O and CPU usage
and give me a more realistic measurement of true machine performance. Below
are my HercGui numbers. The CPU number is slightly lower than before, but
now notice the I/O number.
12:06:19.242 00001344 HHC02272I From Sat Sep 03 12:06:19 2016 to Sun Sep 04
12:06:19 2016
12:06:19.242 00001344 HHC02272I MIPS: 812.147857
12:06:19.242 00001344 HHC02272I IO/s: 3236
12:06:19.242 00001344 HHC02272I Current interval is 1440 minutes
My gut feeling is that the high I/O capability can be attributed entirely to
the SSDs. Per Windows Task Manager, my memory usage never went over 22%
(of the 16 GB installed).
For the record, the processor is an Intel Xeon E3-1225 v3 quad-core @
3.20 GHz, the RAM is Hyundai PC3-12800 (800 MHz), and the drives are LITEONIT
LCT SSDs, SATA-III 6.0 Gb/s. All components were acquired from Dell.
From: hercules-***@yahoogroups.com [mailto:hercules-***@yahoogroups.com]
Sent: Friday, September 02, 2016 4:19 PM
To: hercules-***@yahoogroups.com
Subject: RE: [hercules-390] Stress Testing Hyperion under Windows Server.
It looks like my tests from yesterday produced nearly identical results
today. But this time, I skipped running the Assembly and link steps and
simply ran the NBENCH steps in parallel. As before, with eight concurrent
NBENCH steps executing, all HercGui and Task Manager indicators stayed maxed
out at 100%. As you can see, without running the Assembly and link steps my
I/O numbers dropped like a stone.
15:57:33.967 00000450 HHC02272I From Thu Sep 01 15:57:33 2016 to Fri Sep 02
15:57:33 2016
15:57:33.967 00000450 HHC02272I MIPS: 899.872161
15:57:33.967 00000450 HHC02272I IO/s: 527
15:57:33.967 00000450 HHC02272I Current interval is 1440 minutes
My Windows Server is at the current Microsoft update level. Over this
weekend I plan to apply some Dell BIOS and chipset updates to the T20 server
and bring it up to the current level as well. I will then rerun my tests. Who
knows? Maybe the numbers will change. I'll be pushing for that magic 900
MIPS level.
-----Original Message-----
From: hercules-***@yahoogroups.com [mailto:hercules-***@yahoogroups.com]
Sent: Friday, September 02, 2016 7:31 AM
To: hercules-***@yahoogroups.com
Subject: Re: [hercules-390] Stress Testing Hyperion under Windows Server.
Ivan,
Your TGV reservations with SNCF also wind up in TPF, aka the Airline Control
Program. TPF used to be maintained from VM, but now MVS is required, and
those who relied on Pipelines under CMS had a problem.
When I did some performance measurements along the lines you suggest, in
conjunction with what I thought should improve the code, my experience was
that there was no correlation between what code I changed in Hercules and
where performance changed.
My theory is that Hercules performance is governed by cache misses, because
the footprint of running even the simplest instruction is enormous. So
pushing a bit of code across a cache line might have a lot more effect than
saving a single x86 instruction.
Paul, out in the real world there would at least be a database involved. If
it is a transaction processor, you will at least have CICS or equivalent, if
not (shudder) WebSphere.
There is also TPF (z/TPF), but I never ran it or saw it run (I think it is
mainly used in the airline industry to provide centralized booking systems).
Now as far as "performance" is concerned, I once ran an (automated) daily
test on various instructions to see if any change might have had a
significant impact on instruction execution times.
It was an IPLable test that ran various instructions in long unrolled loops
(within a page) and punched the results. The punch cards were then
incorporated into a DB and could then be processed by a PHP program and
viewed in a web browser. It also relied on the timer facilities (TOD clock).
I had this service up for about 3 years.
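In rough outline, each probe looked something like the sketch below. This is
not the actual code; the register numbers, iteration count, and labels are
purely illustrative, and the surrounding CSECT, base register, and punch I/O
setup are omitted:

*        Illustrative timing kernel: bracket an unrolled run of the
*        instruction under test (here LR) with STCK and keep the loop
*        within a single page.
TIMELR   STCK  BEFORE             capture the TOD clock before the loop
         LA    3,1000             outer iteration count (arbitrary here)
LOOP     LR    1,2                instruction under test, unrolled ...
         LR    1,2                ... many more times in the real test
         LR    1,2
         LR    1,2
         BCT   3,LOOP             decrement and repeat
         STCK  AFTER              capture the TOD clock after the loop
*        elapsed TOD units = AFTER - BEFORE, later punched for the DB
BEFORE   DS    D                  doubleword for the starting clock value
AFTER    DS    D                  doubleword for the ending clock value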
- There was obviously some sort of "heisencache" issue... I could never get
very consistent results (from one day to the next I would sometimes see a
10% difference although no change had been made), possibly because of a
subtle difference in host instruction execution sequence, or some background
process running.
- I only ran tests on a limited set of instructions (mainly loads (L, LR,
LA, LM, IC, ICM), stores (ST, STC, STCM, STM), branches, and moves (MVC))
and didn't test DAT (but in Hercules, DAT and real mode are almost the
same).
However, it allowed me to detect whether some seemingly minor change could
have had a major impact through some weird side effect.
--Ivan