Discussion:
Stress Testing Hyperion under Windows Server.
poodles511@sbcglobal.net [hercules-390]
2016-09-01 21:49:14 UTC
Permalink
I just finished stress testing Hyperion with z/OS. Everything is installed on a dedicated Dell PowerEdge T20 server with a quad-core 3.2 GHz Xeon processor and 16 GB of RAM. The machine is running Windows Server 2008 R2. Windows Server, Hyperion, and z/OS are all installed on SSDs.


For my test, I decided to assemble, link, and execute eight concurrent MIPS test routines I found on CBT. With all eight initiators occupied, the four HercGui processor utilization indicators along with the Windows Task Manager showed the Dell box was at 100% CPU utilization.


After my test I decided to check the HercGui log. It showed:
15:57:33.504 00000450 HHC02272I From Wed Aug 31 15:57:33 2016 to Thu Sep 01 15:57:33 2016
15:57:33.504 00000450 HHC02272I MIPS: 883.824245
15:57:33.504 00000450 HHC02272I IO/s: 1732
15:57:33.504 00000450 HHC02272I Current interval is 1440 minutes


I was quite surprised by this level of performance.
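As a rough back-of-the-envelope check (assuming that MIPS figure were sustained for the whole 1440-minute interval, and that the four HercGui meters correspond to four emulated CPs sharing the load evenly):

    883.824245 MIPS x 86,400 s ~= 7.6 x 10^13 emulated instructions in 24 hours
    883.8 MIPS / 4 CPs         ~= 221 MIPS per emulated CP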
Vince Coen vbcoen@gmail.com [hercules-390]
2016-09-01 23:43:37 UTC
Permalink
Why, do you think Windows is efficient, with low overheads?

Now do the same under Linux - it will be a lot better and without 100% util.
'Dan Skomsky' poodles511@sbcglobal.net [hercules-390]
2016-09-02 00:05:16 UTC
Permalink
No, that’s not the point whatsoever. It’s all about bang per buck. Nothing more, nothing less.



'Dave G4UGM' dave.g4ugm@gmail.com [hercules-390]
2016-09-02 07:56:33 UTC
Permalink
Post by Vince Coen ***@gmail.com [hercules-390]
Why, do you think Windows is efficient, with low overheads?
Now do the same under Linux - it will be a lot better and without 100% util.
If you are not getting 100% CPU util then you have an I/O bottleneck...

It's been a while since I did comparative testing, but in fact there is usually only a small difference between performance on Windows and Linux, and for some releases Windows is faster.
Modern Linux is also getting bloated. Rather than spout advertising, please post verified facts.

Dave
kerravon86@yahoo.com.au [hercules-390]
2016-09-02 08:17:50 UTC
Permalink
Post by 'Dave G4UGM' ***@gmail.com [hercules-390]
Post by Vince Coen ***@gmail.com [hercules-390]
Why, do you think Windows is efficient, with low overheads?
Now do the same under Linux - it will be a lot better and without 100% util.
If you are not getting 100% CPU util then you have an I/O bottleneck...
Exactly. Unless you have a very simple program like "copy", your application should always be CPU-bound and doing 100% CPU, and the Windows/Linux overhead should be minimal and irrelevant. Operating systems should not be the bottleneck. The bottleneck should always be in your application.

BFN. Paul.
'Dave G4UGM' dave.g4ugm@gmail.com [hercules-390]
2016-09-02 08:28:06 UTC
Permalink
Praise a deity of your choosing. We agree on something!

Dave
G4UGM
kerravon86@yahoo.com.au [hercules-390]
2016-09-02 08:54:18 UTC
Permalink
Post by 'Dave G4UGM' ***@gmail.com [hercules-390]
Post by ***@yahoo.com.au [hercules-390]
Post by 'Dave G4UGM' ***@gmail.com [hercules-390]
If you are not getting 100% CPU util then you have an I/O bottleneck...
Exactly.
Praise a deity of your choosing. We agree on something!
I thought we agreed on most things.

It's rare/non-existent for me to have a long argument with you.

BFN. Paul.
'Dan Skomsky' poodles511@sbcglobal.net [hercules-390]
2016-09-02 09:41:45 UTC
Permalink
The T20 server with Windows Server 2008 R2 was on sale from Dell a couple of years ago for $300. The 300 GB HDDs are selling for $30 each from Dell on eBay. So for $400 you can easily get an 800+ MIPS box up and running. All components, including software, were totally plug-and-play. The assemble, link, and go program I kept running was 'NBENCH'. Even with the box running at 100%, I was getting good response time on Windows Remote Desktop, pseudo printing using HercPrt, remote job submission from SPFLite, and remote console usage using SNACONS.



quatras.design@yahoo.com [hercules-390]
2016-09-23 18:28:55 UTC
Permalink
Dan,


As someone who works with the SPFLite developer, I am pleased to see you are using it for job submission to Hercules. We worked pretty hard to make SPFLite a useful tool for Hercules users. SPFLite works well with HercPrt if you set it up to browse SYSOUT files that are defined in SPFLite as EOL AUTO and PAGE ON. You can also set "green bar" colorization of groups of three lines. On a large enough monitor, with the right font, you can fit a whole SYSOUT page, and the Page Up/Down keys move in units of whole pages. It looks just like the real thing, as if you were leafing through actual green-bar paper. SPFLite comes supplied with a font (Raster) that allows you to fit enough lines for a whole page. On my monitor, I was able to get about 60 lines with Raster14, and 66 lines with Raster13.


See SPFLite Web Site http://www.spflite.com for more information.


Robert Hodge

'John P. Hartmann' jphartmann@gmail.com [hercules-390]
2016-09-02 10:02:27 UTC
Permalink
Paul, out in the real world there would at least be a database involved. If it is a transaction processor, you will at least have CICS or equivalent, if not (shudder) WebSphere.

In that context the logic of the application programs is irrelevant. Tell me how many database accesses you do and I'll tell you what your performance is.

In the WebSphere world it is now becoming ludicrous; it would be laughable if it did not make any old-timer weep. Write one line of application code; it generates two million lines of Java, which then after a while gets JIT-ed, and so on. Not to mention the garbage collector.

The situation with Hercules is different, since we do not really care
much about the application program (unless you consider Hercules an
application program, of course).

We have

host->hercules->operating system->[sie->operating system]->application program

Where are you measuring performance?
Post by ***@yahoo.com.au [hercules-390]
Operating systems
should not be the bottleneck. The
bottleneck should always be in your
application.
Ivan Warren ivan@vmfacility.fr [hercules-390]
2016-09-02 10:59:42 UTC
Permalink
Post by 'John P. Hartmann' ***@gmail.com [hercules-390]
Paul, out in the real world there would at least be a data base
involved. If it is a transaction processor, you will at least have CICS
or equivalent, if not (shudder) Websphere.
There is also TPF (z/TPF)... but I never ran it or saw it run (I think it is mainly used in the airline industry to provide centralized booking systems).

Now as far as "performance" is concerned, I once ran an (automated) daily test on various instructions to see if any change might have had a significant impact on instruction execution times.

It was an IPLable test that ran various instructions in long unrolled loops (within a page) and punched the results. The punched cards were then incorporated into a DB, where they could be processed by a PHP program and viewed in a web browser. It also relied on the timer facilities (TOD clock). I had this service up for about 3 years.

I had two main issues:

- There was obviously some sort of "heisencache" issue... I could never get very consistent results (from one day to the next I would sometimes see a 10% difference although no change had been made), possibly because of a subtle difference in host instruction execution sequence, or some background process running.
- I only ran tests on a limited set of instructions (mainly loads (L, LR, LA, LM, IC, ICM), stores (ST, STC, STCM, STM), branches, and moves (MVC)) and didn't test DAT (but in Hercules, DAT and Real are almost the same).

However, it allowed detecting whether some seemingly minor change could have had a major impact through some weird side effect.

--Ivan
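
For anyone curious, here is a minimal host-side analogue of that kind of measurement in C. It is only a sketch: Ivan's actual test was IPLable S/390 code that used the TOD clock and punched its results, while this just times an unrolled loop with the host's POSIX monotonic clock.

#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define ITERATIONS 1000000L

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

int main(void)
{
    volatile uint64_t sink = 0;   /* volatile so the loop is not optimized away */
    uint64_t start = now_ns();

    for (long i = 0; i < ITERATIONS; i++) {
        /* unrolled body: eight cheap operations per iteration,
           standing in for a run of simple loads/stores */
        sink += 1; sink += 2; sink += 3; sink += 4;
        sink += 5; sink += 6; sink += 7; sink += 8;
    }

    uint64_t elapsed = now_ns() - start;
    printf("%.2f ns per unrolled iteration (sink=%llu)\n",
           (double)elapsed / ITERATIONS, (unsigned long long)sink);
    return 0;
}

Run it a few times back to back and the per-iteration figure will typically wander by a few percent, which is the same "heisencache" effect: cache and branch-predictor state, plus whatever else the host is doing, dominate differences this small.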



W Mainframe mainframew@yahoo.com [hercules-390]
2016-09-02 12:00:59 UTC
Permalink
Some months ago I had an opportunity to run a disaster recovery exercise sponsored by a customer. They were very excited about the Hercules emulator, and perhaps about the Hercules results.
My customer is a bureau that processes different kinds of information, such as payroll checks (HR) and accounting, and provides online applications for more than 300 users. At certain times of the month the number goes up to 450 users. Our production environment has three IBM Multiprise H50s, maximum internal DASD, and only one storage unit. We don't have more than 350 MIPS. Most of our program products are home-grown.
For my "fake DR" I used an IBM Power 740 running a Red Hat OpenClient LPAR with six cores (3.6 GHz), 32 GB RAM, and Hercules Hyperion. In my real world we have an old VM/ESA running two VSE/ESA 2.5 guests and an OS/390 V2.8 guest (DB2 V6 and CICS/TS 1.3). Of course, I ran the VSEs and OS/390 outside of a virtual machine (but who knows, in the future, all OSes running together :-) ). I was not concerned with the MIPS values in the Hercules console, but with my BATCH/ONLINE window times instead.
After a weekend running the ONLINE and BATCH processes (under OS/390), we saw a delay of only about 33 minutes. From Friday starting at 6 pm to Monday at 4:30 am: more than 1000 BATCH programs executed, 100 PCOM sessions running macros to simulate CICS transactions (1000 transactions per hour), and 300 FTP clients uploading/downloading huge files at different moments. I've collected a lot of information about our DR and now I am trying to understand the numbers (DB2 vs. VSAM, CICS, sort...).
In general my user applications are written in COBOL and Assembler. I didn't consider my development team working at the same time.
How does that sound to you guys?
Dan

opplr@hotmail.com [hercules-390]
2016-09-05 02:42:36 UTC
Permalink
I'm surprised that the window was only 33 minutes longer.

YMMV, but it has been my experience on PC equipment that I/O kills performance.

It may help to use RAID, or compressed DASD, or uncompressed DASD, or to spread heavily used DASD across separate SATA/SCSI devices.

I don't suppose this is something you can do over and over again with varying physical environments?

Phil
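
To make that last point concrete, here is a hypothetical fragment of a Hercules configuration file with the busiest volumes spread across two different physical drives. The device numbers, volume file names, and paths are made up for illustration; .cckd files are compressed CKD DASD, and you would substitute your own layout:

# DASD spread across two physical drives (illustrative only)
0A80    3390    D:\dasd\mvsres.cckd     # system residence volume on drive D:
0A81    3390    D:\dasd\spool1.cckd     # JES2 spool volume on drive D:
0A82    3390    E:\dasd\work01.cckd     # heavily used work pack on drive E:
0A83    3390    E:\dasd\appdb1.cckd     # application database volume on drive E: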
'John P. Hartmann' jphartmann@gmail.com [hercules-390]
2016-09-02 12:30:59 UTC
Permalink
Ivan,

Your TGV reservation with SNCF also winds up in TPF, aka Airline Control Program. TPF used to be maintained from VM, but now MVS is required, and those who relied on Pipelines under CMS had a problem.

When I did some performance measurements along the lines you suggest, in conjunction with what I thought should improve the code, my experience was that there was no correlation between the code I changed in Hercules and where performance changed.

My theory is that Hercules performance is governed by cache misses, because the footprint of running even the simplest instruction is enormous. So pushing a bit of code across a cache line might have a lot more effect than saving a single x86 instruction.
'Dan Skomsky' poodles511@sbcglobal.net [hercules-390]
2016-09-02 21:18:32 UTC
Permalink
It looks like my tests from yesterday produced nearly identical results today. But this time, I skipped the assembly and link steps and simply ran the NBENCH steps in parallel. As before, with eight concurrent NBENCH steps executing, all HercGui and Task Manager indicators stayed maxed out at 100%. As you can see, without the assembly and link steps my I/O numbers dropped like a stone.

15:57:33.967 00000450 HHC02272I From Thu Sep 01 15:57:33 2016 to Fri Sep 02 15:57:33 2016
15:57:33.967 00000450 HHC02272I MIPS: 899.872161
15:57:33.967 00000450 HHC02272I IO/s: 527
15:57:33.967 00000450 HHC02272I Current interval is 1440 minutes

My Windows Server is at the current Microsoft update level. Over this weekend I plan to apply some Dell BIOS and chipset updates to the T20 server and bring it up to the current level as well. I will then rerun my tests. Who knows? Maybe the numbers will change. I'll be pushing for that magic 900 MIPS level.

'Dan Skomsky' poodles511@sbcglobal.net [hercules-390]
2016-09-04 17:54:56 UTC
Permalink
Well, I updated the BIOS and chipset microcode on my Dell box to the latest and greatest levels. I then continued with my stress testing. This time I submitted ten copies of NBENCH (with unique JOB names, of course). Eight immediately jumped into my free initiators and ran to completion. As the first two NBENCH jobs completed, the remaining two pending jobs jumped into the freed initiators. All NBENCH jobs were full assemble, link, and go examples. For those who aren't familiar with full NBENCH, each JOB consists of 35,000+ lines of Assembler source, executes 43 program steps, and produces 72,000+ lines of SYSOUT. The average wall-clock time per JOB was 8.33 minutes.



I decided this method of testing would maximize the mix of I/O and CPU usage
and give me a more realistic measurement of true machine performance. Below
are my HercGui numbers. The CPU number is slightly lower than before, but
now notice the I/O number.



12:06:19.242 00001344 HHC02272I From Sat Sep 03 12:06:19 2016 to Sun Sep 04 12:06:19 2016
12:06:19.242 00001344 HHC02272I MIPS: 812.147857
12:06:19.242 00001344 HHC02272I IO/s: 3236
12:06:19.242 00001344 HHC02272I Current interval is 1440 minutes



My gut feeling is that the high I/O capability can be attributed entirely to the SSDs. Per Windows Task Manager, my memory usage never went over 22% (of the 16 GB installed).



For the record, the processor is an Intel Xeon E3-1225 v3 quad-core @ 3.20 GHz, the RAM is Hyundai PC3-12800 (800 MHz), and the drives are LITEONIT LCT SSDs, SATA-III 6.0 Gb/s. All components were acquired from Dell.



'John P. Hartmann' jphartmann@gmail.com [hercules-390]
2016-09-02 09:31:44 UTC
Permalink
Nevertheless, this thread shows that "we" (Hyperion) need a performance test suite and a documented way to run such a suite, and then a database of results (hardware, OS, Hyperion commit).

This is the only way we will be able to speak meaningfully about
performance.

Any volunteers?

Please,

j.
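
As a starting point, a minimal record layout for such a results database might look like the following. The field names are only a suggestion; the sample row uses Dan's figures from earlier in this thread, the CP count assumes four emulated CPs, and the commit field would hold whatever identifies the Hyperion build under test:

date,host_cpu,host_os,hercules_commit,emulated_cps,workload,mips,io_per_sec,interval_min
2016-09-04,Xeon E3-1225 v3 3.2GHz,Windows Server 2008 R2,<commit>,4,10x NBENCH asm+link+go,812.147857,3236,1440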