Discussion:
S/360 writer, etc. abends at IPL
somitcw
2002-10-23 01:20:48 UTC
Permalink
The S2F3 abend means that you IPLed while a WTR was
running so it ABENDed after the IPL. If you stop the
WTR, RDR, INIT, PUN before IPL, they won't abend.
The commands are:
P 00E
P 00A
P 00D
P INIT ? ? ? I did MFT and never used MVT INITs
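
To make that concrete, the whole sequence spans both consoles. A
rough sketch only (the unit addresses are the ones from the example
above, and 150 is just a placeholder for whatever device number your
SYSRES DASD has in your Hercules configuration):

  On the MVS/MVT operator console, stop the started readers/writers
  and punch first:
    P 00E
    P 00A
    P 00D
  Then, from the Hercules console, re-IPL the system residence volume:
    ipl 150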

The S200 abends started in MVT with a change in
Hercules. My wild guess would be that the 2K protect
key does not match what the channel expects.
Two ways around the problem are:
Use a slightly older copy of Hercules
Use MVS instead of MVT

With a CD downloaded, you could have MVS running
in a few minutes with far fewer problems.
I just started going through the procedure to set up TSO
and couldn't get the first job to work. Upon examining
the system logs and printed output, it looks like
something's wrong with the writers. The system
- - -


marysmiling2002
2002-10-23 01:28:59 UTC
Permalink
This happens when I simply start hercules and then IPL after
connecting consoles.

Okay, I'm into running MVS. That's better anyway. Sorry for my
ignorance, but where do I get this CD?

Thanks,
--Dan
Hugo Drax
2002-10-23 03:32:48 UTC
Permalink
I notice that if I disconnect my cuu 0C1, I have to inact and then act the session again in VTAM. Is there some way to simulate, on another device, something like a 3746 or other FEP? Or can I tie Microsoft SNA Server to MVS 3.8J, so that I can access my mainframe from work or other locations?
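
For reference, the recovery sequence in question is just the usual
VTAM vary commands from the MVS console. A sketch only, with LCL0C1
as a made-up node name; use whatever name your VTAMLST local major
node actually gives the 0C1 terminal:

  V NET,INACT,ID=LCL0C1,F    <- force the hung LU inactive
  V NET,ACT,ID=LCL0C1        <- activate it again
  D NET,ID=LCL0C1            <- confirm it shows as active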


S. Vetter
2002-10-23 04:03:12 UTC
Permalink
This has been discussed before. At present the answer is no (to both questions). Sorry, perhaps one day.

Scott

Sam Knutson
2002-10-23 01:51:33 UTC
Permalink
Post by marysmiling2002
Okay, I'm into running MVS. That's better anyway. Sorry for my
ignorance, but where do I get this CD?
http://www.bsp-gmbh.com/turnkey/tk3_faq.html#q001

http://www.cbttape.org/cdrom.htm



Best Regards,

Sam Knutson
mailto:sam-***@public.gmane.org
My Home Page http://www.knutson.org
CBT Tape http://www.cbttape.org



Hugo Drax
2002-10-23 03:43:55 UTC
Permalink
I never had the opportunity to touch a mainframe (born in '71); I live
in the world of TCP/IP, servers, routers, etc., but I have always found
the big iron a mystery and something I wanted to get some experience
with. My closest exposure was as an entry-level tech some time ago,
working on IDEA/IBM controllers and 3270 terminals; I remember having
to call tech support to re-vary or gen stuff, but I never saw the
magic, so it's amazing to be doing this at home :). Anyhow, great
program, and I'm having fun trying to figure this out and learning the
ropes. I terminal-serviced to my home PC with the session open, and the
mainframe guys were amazed at seeing this product; one of them went in
and started messing with it and was truly impressed. Anyhow, they are
pretty helpful and printed out some useful references, showed me some
useful commands, etc.

My question is: why won't IBM promote something like this to students,
hobbyists and others in educational fields? I honestly think the secret
to getting more people interested in MVS etc. is to make it available
to the hobbyist/student or educational field. It's a pretty powerful OS
with lots of facilities; it is much more "enterprise" than Unix, from
what I'm seeing so far.


S. Vetter
2002-10-23 04:22:42 UTC
Permalink
A couple of reasons, based on past messages and my own thoughts: 1) The cost of support and distribution, which, if IBM did this, would reduce their profits. 2) There could be some who would use it for commercial purposes while giving little in return. 3) The potential for liability. 4) It may impact other vendors, be it in selling systems, support, or training.

Of course I could elaborate on them, and think of a few others, but it's getting late...


Scott

Greg Smith
2002-10-23 05:27:55 UTC
Permalink
I think Hugo will go far if he remembers to clean his brown nose off
once in a while ;-)

IBM is a dollars-and-cents company. If the higher-ups can be
convinced that a cheap/free hobbyist license will earn them a return,
then they may go for it. Someway/somewhere/somehow a business plan
must be proposed. For us down in the trenches it's a no-brainer,
but.... And as Scott points out, we need some kind of certification
that Hercules runs stuff like it's supposed to. Again, down here in
the dirt, we know that it does, mostly.

All I can suggest is to start making the wheel very squeaky.

Greg
vmlamer
2002-10-24 16:57:00 UTC
Permalink
Post by Hugo Drax
I never had the opportunity to touch a mainframe (born in 71) I live
in the world of TCP/IP,Servers,routers heh heh heh

Frankly, if you were born in 71 and consider mvs to be anything other
than complete crap compared to unix, then one has to question if you
know either very well.


Adam Thornton
2002-10-24 17:42:17 UTC
Permalink
Post by vmlamer
Frankly, if you were born in 71 and consider mvs to be anything other
than complete crap compared to unix, then one has to question if you
know either very well.
*I* was born in 1971.

I consider MVS to be something other than complete crap compared to
Unix. There are tasks I consider more appropriately handled by MVS,
and tasks I think Unix is more appropriate for.

I don't know MVS very well, but I do know Unix quite well.

Perhaps if you explained yourself a bit more I'd understand what you
mean, since it probably isn't "MVS 5uXX0rZ!!1! Un1X Ru13Z!11!" which is
what it came across as.

Adam
--
adam-uX/***@public.gmane.org
"My eyes say their prayers to her / Sailors ring her bell / Like a moth
mistakes a light bulb / For the moon and goes to hell." -- Tom Waits

Jay Maynard
2002-10-24 19:53:01 UTC
Permalink
Post by vmlamer
Post by Hugo Drax
I never had the opportunity to touch a mainframe (born in 71) I live
in the world of TCP/IP,Servers,routers heh heh heh
Frankly, if you were born in 71 and consider mvs to be anything other
than complete crap compared to unix, then one has to question if you
know either very well.
I bet you're the same bozo who posted a similar message in
alt.folklore.computers.

As I said there, the different OSes have different strengths. MVS is
uniquely suited for industrial strength batch processing, while Unix and VM
are suited more to interactive computing.

Hugo Drax
2002-10-24 20:32:41 UTC
Permalink
Is there a book or doc on beginning TSO? Maybe other related docs?


Jeffrey R. Broido
2002-10-24 19:32:27 UTC
Permalink
Mr. Lamer,

Oh, what a juicy statement. I expect you'll hear from a lot of us;
let me be one of the first. Not only is z/OS, OS/390 or any flavor
of MVS not complete crap, but even creaky, old OS/360 runs rings
around most operating systems, past and present.

Current versions of MVS, from OS/390 2.10 through z/OS 1.4, along
with the truly great IBM hardware they run on, represent the finest
production tool the world has ever known.

Of course, you can counter my strong expression of opinion with your
own strong expression of opinion, perhaps as boorish as the one I'm
presently typing a response to, and I urge you to do so as long as
you can help keep the discussion civil.

I have nothing whatsoever against Unix, mind you. I was using Unix
on an H-P engineering workstation I bought in 1985 and became quite
fond of it, even nasty, old VI, and I've happily used other
production operating systems on big iron including Multics, TENEX,
TOPS-10, VMS, VM/CMS, GCOS and even a little time with TSS, but
nothing I've come across can compare with MVS for stability,
throughput or security. Back in 1972, I went to work for a shop
which was running OS/360 MVT and eventually MVS. Even on our
original half-MIPS machine, with outboard channel boxes and a mere
2.75MB of honest-to-goodness ferrite core memory, we supported close
to 200 concurrent timesharing users on TTYs and IBM 2741s with
sub-half-second response time. This may not mean much to you, but that
was a big deal in those days. Today, I doubt you'll find another
software/hardware combination which will stay up for months at a
time and can support dozens of interactive users per MIPS.

You don't like MVS? Fine. You like Unix or CMS? Equally fine.
But that doesn't make MVS "complete crap" as you say. Most
computers may be binary, but that doesn't mean that all points of
view must be binary. If you like A better than you like B, you
don't have to denigrate B to support A. Think about it.

Broido
Post by vmlamer
Frankly, if you were born in 71 and consider mvs to be
anything other than complete crap compared to unix, then
one has to question if you know either very well.
b***@public.gmane.org
2002-10-27 10:36:03 UTC
Permalink
He's right.

Unless you stuff a /*eof into an intrdr.

(Remember that bug, Jeff?)

"Welcome to HASP 2.11. In this new version, the reading, printing, punching
and plotting of job output is no longer supported. Also, the old 'program
execution' feature has been removed."

Hugo Drax
2002-10-24 20:24:49 UTC
Permalink
I've been messing with it and reading up on it, and it seems to have much more detailed process control/batch control than Unix. It's nowhere near as friendly as Unix, but it seems like an OS designed for bigger jobs. I know the folks at my job, where 65,000+ PCs access the S/390 all the time, would disagree with him; I get the impression that MVS is more robust for heavy-duty processing like banking, payroll and other critical stuff. Also, the logging seems much more detailed than my Linux 8 box, especially considering this is an old, obsolete version; I wish I could see what RACF and the new stuff look like.
----- Original Message -----
From: vmlamer
To: hercules-390-***@public.gmane.org
Sent: Thursday, October 24, 2002 12:57 PM
Subject: [hercules-390] Re: Running a mainframe at home is a interesting IBM needs to listen
Post by Hugo Drax
I never had the opportunity to touch a mainframe (born in 71) I live
in the world of TCP/IP,Servers,routers heh heh heh

Frankly, if you were born in 71 and consider mvs to be anything other
than complete crap compared to unix, then one has to question if you
know either very well.


S. Vetter
2002-10-24 20:42:30 UTC
Permalink
OK, I'm going to add my 4 cents worth to this seemingly emotional conversation.

Each OS has its strengths and weaknesses, based on its designers' goals. Sure, I wouldn't want to do word processing on a mainframe, and I sure wouldn't want to do anything business-critical on a PC running Windows. As for UNIX being "friendlier", TSO is better, as it doesn't have cryptic commands.

And to be fair, a VM/CMS user has some pretty powerful capabilities: VM can run multiple OSes and can handle more online users than MVS TSO. But (at least as of VM Release 6) you can't do batch well or share resources (like files). On the PC side, sure, you now have something similar to VM, but it is limited to a few OSes.

Scott

Gregg C Levine
2002-10-24 20:12:26 UTC
Permalink
Hello from Gregg C Levine
Excuse me? I was born in 1962, and when I got involved with computers,
the big thing on campus was probably OS/360. It is more capable than
MVS, now and even then, when MVS was evolving out of (probably) MFT. I
strongly suggest you tone down your postings, especially since a lot of
us are also using Hercules to try out different solutions for
Linux/390. Adam? You were born in '71? Wow...... Who here is closest to
my age? Bomber, that was a well-written message (I can use your
nickname here?). This technology is, well, peculiar, and rightly so; it
was designed early on. And the software is also peculiar; the early
OSes, with the exception of Linux/390, are all we have to work with.
Jay, the word "bozo" is a twenty-point score, if you're playing
Scrabble, that is.
-------------------
Gregg C Levine hansolofalcon-XfrvlLN1Pqtfpb/***@public.gmane.org
------------------------------------------------------------
"The Force will be with you...Always." Obi-Wan Kenobi
"Use the Force, Luke."  Obi-Wan Kenobi
(This company dedicates this E-Mail to General Obi-Wan Kenobi )
(This company dedicates this E-Mail to Master Yoda )
rvjansen-qWit8jRvyhVmR6Xm/
2002-10-24 21:33:25 UTC
Permalink
Hi Gregg,

well, I'm also from 1962, since you asked.
We should not worry about people who do not know what they are talking
about, or are plainly trolling.
Also, we would not want to offend bozo by comparing him to this ignorant
youth.
Gregg C Levine
2002-10-24 22:04:07 UTC
Permalink
Hello from Gregg C Levine
Actually, I agree. But I wasn't comparing the clown to that joker. Jay
opened the door for that one, when he used it to describe the troll.
Indeed. I won't worry about him.
-------------------
Gregg C Levine hansolofalcon-XfrvlLN1Pqtfpb/***@public.gmane.org
------------------------------------------------------------
"The Force will be with you...Always." Obi-Wan Kenobi
"Use the Force, Luke."  Obi-Wan Kenobi
(This company dedicates this E-Mail to General Obi-Wan Kenobi )
(This company dedicates this E-Mail to Master Yoda )
Bernhard Graf
2002-10-25 07:55:00 UTC
Permalink
Hi Gregg,

well, I am also one of the 62ers :-)
My opinion, as a primarily UNIX addict, is as follows:
a) I don't think you can say any host OS is really crap when, as
people stated before, 20 and more years ago these OSes were capable
of supporting hundreds of users in 384 KB (not MB!) of memory.
Another thing that is absolutely hip just today, the idea of the
virtual machine, a la VMware, is an idea that IBM already realized in
the dark ages on these machines (see the hercules-vm forum for that).
Actually, IBM even went one step further than anybody has come in the
PC/midrange world today: remember, before the IPL there once was an
IMPL! (I still know this because at university I had set up and
managed an IBM 370 running DOS/VS.) IMPL means that the processor
first loaded its microcode and then booted its kernel on top of that
microcode. This was done so that IBM 360 software could run on the
newer 370 processors. Transferred to today, that would mean: boot
your PC, decide whether you want to load the ALU with a Pentium,
PPC, PA-RISC, UltraSPARC or any other command set, and then boot the
corresponding OS! I haven't seen that so far, have you?
b) Anything you say, you say out of a different personal view. I also
would not want to offend bozo due to his youth and his admitted
inexperience. I mean, if you started computing with the ATARI and
your primary fun is playing invaders and xgalaga, well then MVS is
complete crap for that ;-) Because, actually it was made for
something completely different.
c) Still, I would recommend that bozo use MVS, VM and VSE a little
bit, as he can learn A LOT from them about how computers actually
work! I really must admit that I learned how computers work from
that old 370. The computing experience I had before was an
Apple ][, which was also a great machine and did everything
automatically. If you wanted to create a file, you just opened one
with an editor and saved it. It worked just like magic. When I
started DOS/VS, on the other hand, I first had to make a plan of the
hard disk, count sectors and cylinders, and fiddle around with disk
labels and extents, and for the first time I KNEW what my Apple ][
really did when the disk drive was spinning. So, even though DOS/VS
was, in my view in those days, also crap compared to Apple DOS, I
learned a lot from it, which helped me in later days to understand
the Apple OS and any other that followed.

So, all in all, I think you should not flame bozo too much, but
instead encourage him to continue to play with these old dinosaurs.
In that respect, we have all created our own "Jurassic Park" on
our PCs ;-)

Bernhard

PS: And, by the way, could the others please stop attaching all
those hundreds of kilobytes of old messages to their replies? It
makes the thread really hard to read!
Willem Konynenberg
2002-10-25 12:48:12 UTC
Permalink
Post by Bernhard Graf
Boot your PC, decide whether you want to load the ALU with a Pentium,
PPC, PARISC, UltraSPARC or any other commendset and then boot the
corresponding OS ! I haven't seen this so far, have you ?
Transmeta Crusoe? ;-)

The microcode "storage medium" has changed over time...
These days it tends to be stored in on-chip ROM, but for example
the Pentium had a facility to update it in the field.

It wouldn't allow you to choose between Pentium and PPC at boot
time, but I don't think the choices with the 370 were quite as
dramatic. I suppose Intel *could* have supported minor variations
in the CPU programming model offered, although they obviously
had little incentive to do so.

Hmm, another example is perhaps the Alpha PALcode.
One loads the PALcode that provides the low-level CPU programming
interface that one's operating system expects. This isn't a "BIOS
ROM", it's a level below that, but it is a level above microcode.
--
Willem Konynenberg <w.f.konynenberg-/NLkJaSkS4VmR6Xm/***@public.gmane.org>
Konynenberg Software Engineering

Gordon R. Keehn
2002-10-25 14:32:55 UTC
Permalink
We're digging up memories that have been archived for a lot of years, but it
seems to me there was a variant of the PDP-11 (predecessor of the VAX boxes)
back in the '70s that had user-modifiable microcode. The main purpose /
advantage of loadable microcode was / is to allow the hardware vendor to
touch up machine logic errors without having to recall thousands of processor
modules. It's a fine thing when not only is machine code a high-level
language, but an interpreted one at that!

--

----
Gordon R. Keehn, CPSM Change Team
CICS/390 Service, USA
Todd Enders
2002-10-25 01:21:31 UTC
Permalink
Post by Gregg C Levine
Excuse me? I was born in 1962. And when I got involved with computers,
it happens, that the big thing on campus, was probably OS/360. It is
more capable then MVS, now, and even then, when MVS was evolving out of
probably MFT. I strongly suggest you tone down your postings, especially
since a lot of us, are also using Hercules to try out, different
solutions for Linux/390. Adam? You were born in '71? Wow...... Who here
is closest to my age? [...]
Well, Gregg, I'm not too far behind you, being born in 1961. When
I went to university, they were running MVS 3.8, and I thought it was the
greatest thing since sliced bread. :-) Now, having a virtual system running
the self-same OS really *is* the greatest thing since sliced bread! :-)

Still doing application programming for the same university, and using
MVS and TSO every day. Sure, time has brought its share of improvements,
but the basic MVS knowledge I got from 3.8 all those years ago *still* holds
me in good stead. The basic MVS utilities work just as well (and just the same
way) as they ever did. JCL is still JCL. The stability of MVS is a remarkable
contrast to the evolution of PC operating systems. :-)

Sure, CICS, ISPF, DB/2, etc., along with a more recent version of MVS
would be nice to have available for us hobby types, but what we *already*
have available is pretty darn nice too. I certainly don't mean to throw water
on all the good folks out there who are trying to improve things for us, but one
can learn a *lot* of useful stuff from MVS 3.8 and all the other goodies
available to run under it that are out there *now*, without hassle. The only
thing I would wish for is more ready availability of the MVS 3.8 era manuals.
Pity IBM won't see fit to at least make a CD available with the full doc set,
or allow scans/PDFs of the originals to be put up online someplace. :-(

Todd



marysmiling2002
2002-10-25 02:03:52 UTC
Permalink
Been watching this thread a little. Thought I'd throw in my $.02
worth. Hope I don't get flamed.

I also started computing in the glory days of MVS. (Took CIS-101 in
1979). I've been away for a long time, writing software for Windows,
UNIX, and AS/400. I've always sort of wanted to revisit that era,
because it almost seems like we've lost something over the years, and
I thought it would be interesting to go back and try to identify it.
(I know that sounds vague, but I can't think of a concise way to say
it. Software engineering now doesn't seem to produce products as
efficient, reliable, scalable, etc. as it did in the old days, back
when it *WAS* rocket science, even though we have much more advanced
tools now.)

At any rate, now that I've found a way to go back and learn MVS
again, I'm processing all of the differences and trying to answer the
questions.

The main difference seems to be that the economics have changed
dramatically.

In those days, it made sense to have a team of maybe hundreds, a
large percentage of whom were the world experts in their fields,
spend years designing a single system (originally the S/360). Every
such system sold would go for from hundreds of thousands to millions
of dollars, and would be shared by many people. Entire corporations
would have a single computer that everyone shared. Small advances in
efficiency had huge effects on the bottom line. The architecture had
to be very flexible and scalable.

Nowadays, due mainly to cost factors, it's cheaper (in many ways--not
all) to use a different computer for each task. Computers are
dedicated to a single purpose. A mail server does nothing but be a
mail server. A desktop PC usually only does a handful of things (a
few productivity applications, web access).

The main reason for this cost savings is that the PC, or UNIX box
doesn't have to be engineered to the same level as the mainframe did.
Instead of everything being as good as possible (as in the mainframe
world), it is now sufficient to have everthing be merely good enough.
The impact of a PC crashing today is on a similar level to the impact
of a single job abending in the old days. Everything else continues
to work fine, and the PC is back up in a few minutes.

What amazes me is that the costs for processing power have changed so
much that we can now have the equivalent of a big mainframe on our
desktops. The only difference between the emulated mainframe and the
plain old PC is the *DESIGN* of the mainframe is so much more
general, robust, and efficient. The mainframe doesn't do very much
without a lot of programming effort, but it does the job of being a
computer better than any other architecture ever designed. It's just
that it was created back in an era when the IT department did its
own programming (both system and application).

I think we would do well to learn all we can from the mainframe,
because the present economic reality is not likely to result in any
single design that is nearly as good anytime in our lifetime.
Hardware capability has become cheap enough where it's a lot more
cost effective to throw a lot of hardware at a problem than to write
the best software we can.

The sad thing for me (being a software engineer) is that we seem to
have regressed from where we once were in terms of the state of the
art in engineering.

The mainframe isn't going to die any time soon. There are still
applications out there that require its unique capabilities
(flexibility, massive scale, security, reliability, efficiency,
bandwidth). But it will never dominate the world again like it once
did. It still seems to me that we would do well as modern engineers
to understand the mainframe, just so we don't lose touch with what we
once understood.

Nowadays people don't even know what efficiency means any more. There
are a lot of modern, schooled engineers who cut their teeth on UNIX,
who think having hundreds of users share a machine with less than a
megabyte of main storage and half a MIP of CPU power is an unsolvable
problem. That's really sad. Because of that, I'm telling anyone I can
about Hercules in hopes that I can generate some interest in it as a
learning tool.

Plus, it's very nostalgic and fun at the same time.

Regards,
--Dan
Peter J Farley III
2002-10-25 00:58:50 UTC
Permalink
Who here is closest to my age? Bomber, that was a well written
message, (I can use your nickname here?).
Well, I'm not near you, but I am before you (1950). ;-)

Peter

P.S. -- If it matters, my first CPU was an IBM 1620 (2nd generation
relays-n-transistors HW, variable-length decimal operations, BCD
(6-bit) character set). It also had the first disk drive I ever saw,
an IBM 1311 (specs unknown). The horde of relay clicks when it was
"computing" was pretty noisy, as I remember.


Chris Craft
2002-10-25 02:42:53 UTC
Permalink
Y'all're making me feel insecure in my youth! [1969] My first computer was a
Commodore VIC-20... didn't get to paw a '370 until college in 1987, running
MUSIC, then CMS. Got hooked on world-wide networking on the BITNET RELAY,
before Southern Illinois University at Carbondale finally got hooked up to
the internet in 1989.

Regards,
Chris, RetroComputing Nut
Todd Enders
2002-10-25 02:58:56 UTC
Permalink
Post by Chris Craft
Y'all're making me feel insecure in my youth! [1969] My first computer was a
Commodore VIC-20... didn't get to paw a '370 until college in 1987, running
MUSIC, then CMS. Got hooked on world-wide networking on the BITNET RELAY,
before Southern Illinois University at Carbondale finally got hooked up to
the internet in 1989.
Ah yes, BITNET... :-) There were some quite amazing things one could
do from the MVS batch queues on a machine hooked to BITNET. Like
running your batch jobs on a computer half a world away and getting the
output routed back to you. This was *real* handy when the campus
mainframe was backed up going into finals week. :-) I recall at least half
a dozen machines scattered about the globe that were more or less *wide*
open from the MVS batch side. Not sure anyone realised what sort of a
gaping security hole it was at the time, and at least a couple sites remained
wide open until we got off BITNET in the late 80s.
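
For anyone who never saw it, the whole trick boiled down to a couple
of JES2/NJE control cards in front of an otherwise ordinary job. A
rough sketch, with FARNODE and HOMENODE as made-up node names (the
exact JECL syntax varied by JES2 release and site):

//REMOTE   JOB (ACCT),'RUN IT FAR AWAY',CLASS=A
/*ROUTE XEQ FARNODE
/*ROUTE PRINT HOMENODE
//* ROUTE XEQ asks JES2 to ship the whole job to FARNODE for
//* execution; ROUTE PRINT asks for the output to come back to
//* HOMENODE instead of printing at the executing site.
//STEP1    EXEC PGM=IEFBR14
//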

Todd


Ronald Tatum
2002-10-25 19:26:59 UTC
Permalink
Oh, Lord,
I taught myself Fortran II on a 1620 in 1963, also got introduced to ALC
(FAP, actually) about the same time.

Lucky you, Peter, you had a disk; all the stupid machine I used had was
a card reader and punch - print from punched decks on a 407 accounting
machine.

Horrible thing - I should have quit while I was ahead. I had a key to
the lab where the machine was installed, McCracken's book, the manuals on
the console and shelves, and a bunch of card files. Learned how to turn the
infernal machine on, how to load the first pass of the compiler, collect the
intermediate decks, load the second pass, get the next set of decks, load
the libraries (a term loosely used) and the program, and start over again
because of one teensy-weensy keypunch error...
I wish someone had sat me down and had a long heart-to-heart, but apparently
there weren't any "someones" at that early time.

Still, I have made a living of sorts over the years, had some fun and
collected 'way too many war stories ;-).
- Ron T.
October 1, 1940 - certainly old enough to know better.
Andy Kane
2002-10-26 03:48:37 UTC
Permalink
Hi Ron,

"Lucky to have a disk"?? You were lucky to have a MACHINE at your
disposal!! At the same time you were learning, I was learning - also
from Dan McCracken's book, was there any other way?

The company I was working for bought time (billed by the hundredth of
an hour) on a 7090 at a Service Bureau. The standard procedure was to
submit your card deck (and a billing sheet, of course) to the
scheduler, who was a human, not a program. Every two or three hours,
they would gather a BATCH of work together, go card-to-tape off line
on a 1401, and then run "batch" under a primitive monitor called FMS.
Output was two tapes - print and punch images - which went offline
for tape-to-print and tape-to-punch.

Time was billed cheaper at night, so since we were located some 30
hard-traffic miles away from the Service Bureau, it was company policy
to allow only one run a day, for which they provided messenger pickup
at 8 PM and delivery at 7 AM the next morning. So... that "one little
keypunch error" meant a full day lost. Not something bosses liked. As
a result, we spent quite a bit of time in an activity long relegated
to the scrap heap of history - "desk checking".

Afterthought 1: Ron, I have you beat by 66 days: July 27, 1940!

Afterthought 2: Does anyone have - or know where to find - a copy of
McCracken and Dorn's book "Numerical Methods in Fortran Programming"
1964 edition? I have a personal reason for wanting a copy, or if that
isn't possible, a copy of the Acknowledgements page.

Shalom from Tel Aviv. Andy
Jay Maynard
2002-10-26 10:57:27 UTC
Permalink
Post by Andy Kane
Afterthought 1: Ron, I have you beat by 66 days: July 27, 1940!
You're exactly 20 years older than me, then...27 July 1960.

And, FWIW, my first real mainframe work was done under VM/370 CMS: using a
FORTRAN cross-assembler for the 8080. I was a microcomputer tech at the
time. When the shop went to straight MVS (they had been using VS/1 under
VM), I spent a lot of time pestering the systems guys on converting the
EXECs to CLISTs - and that got their attention when the junior systems
programmer left for greener pa$ture$. They asked me if I was interested, and
the rest is history.

marysmiling2002
2002-10-26 19:41:36 UTC
Permalink
Here's an interesting thought:

It's long been a theory of mine that many old programs are more
reliable than newer programs due to the cost factors involved. For
example, it used to be cheaper to desk check your program deck than
to compile it and see what the compiler errors were, or to run it in
a debugger and try to figure out why it wasn't working.

I find a common method used by many young programmers nowadays is to
blast out some code as quickly as they can type, compile it and fix
errors until it compiles cleanly, and then start running it to see
if it works. There is little to no human validation of the code or
logic.

Even more sadly, this modern approach seems to discourage design, in
the sense that it is so cheap and easy to compile and run programs
now that it requires a lot of discipline on the part of the
programmer to do designs on paper and put in design validation
effort.

Of course, there are many good software engineers who still
carefully design, carefully code, and inspect their code, but the
economics involved used to force EVERYBODY to do that, and to do it
perhaps more thoroughly than anybody does nowadays.

Research suggests that a human reader will find more than twice as many
flaws in a program as a test run will, and that's talking about a human
reader motivated only to try to improve the quality.
A human reader motivated by a day lost to every compiler error will
probably do even better than that.

--Dan
Adam Thornton
2002-10-26 19:55:35 UTC
Permalink
Post by marysmiling2002
I find a common method used by many young programmers nowadays is to
blast out some code as quickly as they can type, compile it and fix
errors until it compiles cleanly, and then start running it to see
if it works. There is little to no human validation of the code or
logic.
Even more sadly, this modern approach seems to discourage design, in
the sense that it is so cheap and easy to compile and run programs
now that it requires a lot of discipline on the part of the
programmer to do designs on paper and put in design validation
effort.
I think you've just described "Extreme Programming."

Bah.

Kids these days.

Adam
--
adam-uX/***@public.gmane.org
"My eyes say their prayers to her / Sailors ring her bell / Like a moth
mistakes a light bulb / For the moon and goes to hell." -- Tom Waits

marysmiling2002
2002-10-26 22:54:56 UTC
Permalink
Precisely!

Postmodernism, for all of its validity and generally good results,
cannot reverse mathematical realities. It merely seeks to question
whether they can really capture the complete essence of things.

All too often, it is used to excuse the human need for instant
gratification, even if discipline ultimately yields better results.

Come to think of it, most young people who disdain the mainframe
(and other truly great systems) as mere "legacy" (which sounds nice,
but is still meant to mean "obsolete"), do so on grounds of its
requiring significant effort before first results are achieved. Who
wants to read the manual anyway? Even less to have to learn several
different languages and command sets before I can start playing?

Once upon a time, we thought it was a privilege to be in this line
of work, and we were willing to exercise due diligence. Postmodern
consumerism has no place for the human side of the bargain; the
system has no right to expect anything of its users. It is a
product, and, if I don't like it immediately, I'll go find a better
one.

Yes, it all does make me feel like the comical, crotchety old
man... "In my day we used to walk 5 miles a day to school in subzero
weather every day... AND WE LIKED IT"

Regards,
--Dan
Fish
2002-10-27 04:56:58 UTC
Permalink

Methinks you are "preaching to the choir", Dan. :)

--
"Fish" (David B. Trout)
(a programmer who, like other PROFESSIONAL programmers (read:
"old-time mainframe programmers"), still desk-checks his code before
compiling it and who *continues* doing so even after it's been
thoroughly debugged and released into production)


Gregg C Levine
2002-10-27 05:11:15 UTC
Permalink
Hello from Gregg C Levine
I'll grok that, and raise you fifty. Man with gills, (Fish), you are
right. So is Dan. I can't tell you how many times I've programmed
straight from my thoughts, and goofed, and instead wrote down the code,
and didn't. Fish, e-mail me privately if that indirect reference to your
name, both real, and nickname is offending.
-------------------
Gregg C Levine hansolofalcon-XfrvlLN1Pqtfpb/***@public.gmane.org
------------------------------------------------------------
"The Force will be with you...Always." Obi-Wan Kenobi
"Use the Force, Luke."  Obi-Wan Kenobi
(This company dedicates this E-Mail to General Obi-Wan Kenobi )
(This company dedicates this E-Mail to Master Yoda )
Fish
2002-10-27 07:09:13 UTC
Permalink
Post by Gregg C Levine
Hello from Gregg C Levine
I'll grok that, and raise you fifty. Man with gills, (Fish),
you are right. So is Dan. I can't tell you how many times I've
programmed straight from my thoughts, and goofed, and instead
wrote down the code, and didn't.
Yep. It's sometimes amazing how much spending the time and effort
*beforehand* to come up with a good design on paper *first* (before
writing a single line of code) helps to: 1) produce quality code
(well written and documented and easy to maintain) that tends to work
right the first time it's written, and 2) end up *saving* time over
the course of the project (i.e. not only with completing the project
on time but also with being able to quickly/easily make changes to
(i.e. maintain) the software involved over its lifetime).

At one company I worked at we spent several months developing the
plan (design document) for modifying our version of DOS/VS(E) to
support 4K paging in addition to the 2K paging it already supported.
(It was an older version of DOS/VSE obviously). Only after we knew
exactly what we needed to do and exactly how we wanted to do it (and
everywhere we needed to do it!) did we then sit down and actually do
it, and when we DID finally reach that point (where we were ready to
actually sit down and start making the coding changes), guess how
long it took us to finish the project and place it into production?

TWO WEEKS! (*less* actually!)

Took us about 4-5 days to do the coding changes and perform some unit
tests (with the batch utility programs mostly), and we were then
ready to begin system testing the following week. On Monday we built
our new test system with our changes and IPL'ed it. Crapped out
pretty early on because of a typo someone made. Fixed it within a few
minutes and then tried another IPL. Worked this time and the system
came all the way up and appeared to be functioning quite normally.
Did some heavy duty testing and found another small bug. Quickly
fixed it too and re-tested. Couldn't seem to break the system and
couldn't find any more bug, but decided to let the system run for the
rest of the week in "pseudo-production" mode just to be sure/safe.

On Monday of the following week all of our systems -- both production
and test -- were up and running on the new version. :)
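
A rough idea of why a page-size change ripples so widely: every shift,
mask, and table size derived from the page size has to change. What
follows is only an illustrative C sketch of that dependency -- the work
described above was done in the DOS/VS supervisor in assembler, and none
of these names come from it:

    /* Illustrative only: page-number/offset arithmetic parameterized on
     * the page size instead of hard-coding the 2K assumption.           */
    #include <stdio.h>

    #define PAGE_SHIFT_2K 11                /* 2048 bytes = 1 << 11 */
    #define PAGE_SHIFT_4K 12                /* 4096 bytes = 1 << 12 */

    static unsigned page_of(unsigned addr, unsigned shift)
    {
        return addr >> shift;               /* which page the address is in */
    }

    static unsigned offset_of(unsigned addr, unsigned shift)
    {
        return addr & ((1u << shift) - 1);  /* byte offset within the page  */
    }

    int main(void)
    {
        unsigned addr = 0x12ABC;            /* arbitrary virtual address */
        printf("2K pages: page %u, offset %u\n",
               page_of(addr, PAGE_SHIFT_2K), offset_of(addr, PAGE_SHIFT_2K));
        printf("4K pages: page %u, offset %u\n",
               page_of(addr, PAGE_SHIFT_4K), offset_of(addr, PAGE_SHIFT_4K));
        return 0;
    }

Every place that buries the equivalent of that shift or mask in-line is a
place the design document had to find before the coding started.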
Post by Gregg C Levine
Fish, e-mail me privately if that indirect reference
to your name, both real, and nickname is offending.
Eh?? WhatEVER are you talking about Gregg? "Man with gills"?

Shii.. er, ... Shoot, Gregg, that's not offensive in the least my
friend. :)

My nickname (or "professional moniker" as I like to call it) *IS*
"Fish"[1] after all. :)
--
"Fish" (David B. Trout)
fish-6N/dkqvhA+***@public.gmane.org

[1] Been going by "Fish" both professionally and informally for, oh,
about 28 years now. :)


John Alvord
2002-10-27 15:51:56 UTC
Permalink
Post by Adam Thornton
Post by marysmiling2002
I find a common method used by many young programmers nowadays is to
blast out some code as quickly as they can type, compile it and fix
errors until it compiles cleanly, and then start running it to see
if it works. There is little to no human validation of the code or
logic.
Even more sadly, this modern approach seems to discourage design, in
the sense that it is so cheap and easy to compile and run programs
now that it requires a lot of discipline on the part of the
programmer to do designs on paper and put in design validation
effort.
I think you've just described "Extreme Programming."
Bah.
Kids these days.
The factor you may not have considered fully is the dramatic reduction in
cost of computing. Compared to (say) 1970, the $ cost of human work has
gone up maybe 4 times and the cost of computation has been reduced by a
factor of (say) 10,000. The obvious compensating strategy is to lean on the
computing side - assuming that minimizing costs is the goal. If you are
after some other - purity/nostalgic - goal, then cost isn't part of the
equation...
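
To put a single number on that shift (using the illustrative ratios above
only -- they are guesses, not measured data), the price of computation
relative to the price of labor fell by roughly 4 x 10,000 = 40,000 times:

    /* Back-of-the-envelope only, using the rough ratios quoted above. */
    #include <stdio.h>

    int main(void)
    {
        double labor_factor   = 4.0;           /* human work vs. 1970  */
        double compute_factor = 1.0 / 10000.0;  /* computation vs. 1970 */
        printf("computation vs. labor: ~%.0fx cheaper than in 1970\n",
               labor_factor / compute_factor);
        return 0;
    }

With a relative price swing that large, trading extra compile-and-run
cycles for less human desk time is the economically obvious move, whatever
one thinks of it as engineering practice.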

john alvord


marysmiling2002
2002-10-27 20:38:28 UTC
Permalink
Hi John,

Not sure if I'm reading your post right.

Did you mean to say that it is less expensive to construct software
without designing it on paper first, or performing the other
validation steps in the classic process?

Or did you mean that the lower quality of modern software vs. older
software is less costly now than it used to be due to the fact that
computing resources are less expensive?

It seems to me there are two main themes to this discussion:

1. Economics:

Because of the change in cost factors, it is now much cheaper to
throw hardware at a problem than software. Because of that, highly
efficient systems like the S/360 architecture and its descendants
are no longer essential for most problems. Designs that are a lot
less clever suffice most of the time. Due to this change, companies
that make software are unwilling to expend resources making it
highly efficient, scalable, or whatever. Why should they?

On the other hand, it's no less expensive for the software to fail
than before. A simple application running on a million desktop
machines still costs hugely in lost data or productivity if it fails
routinely, due to the fact that it is failing on a million machines
instead of just one (though the cost of each single failure may be
much, much lower than the cost of a single failure in the mainframe
days).

One cost has actually gone up since the PC revolution: support. It's
much more costly to configure, maintain, and support thousands of
desktop boxes than one mainframe. One of the costs of buggy software
is support, and it can be very high due to the fact that it must
be "fixed" over and over again, in many different physical
locations.

Due to support costs, trends in system design now favor centralizing
critical functions in client/server configurations. Once you move to
a single server serving thousands of users, we are right back to the
need for reliability, scalability, and availability we used to have,
only there is much more hardware available more cheaply now, so we
still don't need the kind of efficient use of hardware resources we
once did. At any rate, the place of the mainframe in the modern
world is more often as a server than as a host (with a few well
known exceptions). I would contend that it is still one of the best
server systems in existence, though its high cost of both
procurement and operation mean its only suitable for the large
enterprise. No surprise that this is exactly how IBM is positioning
it these days.

2. Methodology:

It's worth raising the question of whether the way we write software
now is better than the way we wrote software back when computing was
much more expensive.

Many people (myself included) casually (unscientifically) believe
that it is actually less expensive to follow a cycle of design, then
validation, then lower level design, then validation, and so forth
before coding, and then validating the code on paper before testing
begins. This lifecycle model is called the "waterfall" model by
process experts, and, due in part to the fact that it is older, it
is not considered to be a "cutting edge" lifecycle model.

Nonetheless, these basic realities still exist: Correcting a problem
in a high level design is nearly always much less expensive than
correcting the same problem in a lower level design. Correcting a
problem in a design is nearly always much less expensive than
correcting the problem in an implementation of that design.
Carefully considering high level options before taking a certain
direction can dramatically decrease the amount of work needed to
solve a problem, since some high-level design directions are much
more expensive than others.

All of these realities would suggest that it is usually cheaper to
use a traditional design process than it is to just start coding and
then go through many iterations until the software system ends up in
its final state.

With respect to quality, as opposed to cost, these realities also
exist: Research suggests that black box testing will find about 40%
of the flaws in a software system, whereas reading the code will
find about 90% of them. Furthermore, the bugs found in black box
testing will tend to be different kinds of bugs than those found in
inspection, meaning that using both methods makes it easy to
approach 100% of the bugs in the system. Also, carefully considered
high level design decisions lead to simpler solutions. Understanding
the design thoroughly before coding leads to better organized
implementations (well designed module boundaries, etc.), which not
only lowers maintenance cost, but also results in systems that are
both easier to validate, and, generally, more valid from the start.

All of these realities would suggest that software designed using
traditional methods would usually have higher quality than software
done using "Extreme Software Engineering" type lifecycle models.

Anecdotal evidence supports these observations. Many "old timers"
can tell stories about how teams performed what, by today's
standards, would be considered extraordinary feats of software
engineering very quickly, and with very high quality. This is
partially due to the methodologies, and partially due to the fact
that those older systems (e.g. IBM mainframes) provided much more
enlightened system interfaces and programming support than newer
systems. The latter is, of course, due to economic factors. When you
can charge millions for each system sold, a lot more resources can
be expended when designing it.

Still, since we think it's actually both better AND cheaper to
design things on paper, we should still be able to realize both
gains in development efficiency AND quality when designing and
implementing modern systems to run on modern system architectures.

For documentation supporting these contentions, see the books "Rapid
Development--taming wild software schedules," and "Code Complete,"
both by Steve McConnell, as well as "Writing Solid Code," by Steve
Maguire.

The main reason people like myself complain about modern practices
isn't because we're nostalgic, but because modern economic realities
(which are great, we all agree), have the unfortunate side effect of
not encouraging careful design as much as prior economic realities
did.

Regards,
--Dan
S. Vetter
2002-10-27 21:19:42 UTC
Permalink
Post by marysmiling2002
Hi John,
Not sure if I'm reading your post right.
Did you mean to say that it is less expensive to construct software
without designing it on paper first, or performing the other
validation steps in the classic process?
While it's not me that said it, I got the impression that the first
interpretation was the one intended.
Post by marysmiling2002
Or did you mean that the lower quality of modern software vs. older
software is less costly now than it used to be due to the fact that
computing resources are less expensive?
Because of the change in cost factors, it is now much cheaper to
throw hardware at a problem than software. Because of that, highly
efficient systems like the S/360 architecture and its descendants
are no longer essential for most problems. Designs that are a lot
less clever suffice most of the time. Due to this change, companies
that make software are unwilling to expend resources making it
highly efficient, scalable, or whatever. Why should they?
I would say that claiming they are no longer essential is incorrect. If that
were true there would be no more mainframes. True, the rule seems to be:
throw hardware at the problem. I have seen companies that would rather
throw in hardware than force programmers to write more efficient code. As a
matter of fact, it was discouraged even to offer to tell them how to do so.

As for why companies should write more efficient code, I have to agree
with you. However, when you get into a multi-user environment, it is more
critical, unless, again, you are willing to throw more hardware at it.

Seeing how MS's Outlook Express 6 runs on Windows XP as opposed to
Netscape Communicator on OS/2, it's like night and day even though XP runs on
a faster processor. And for what? More features that I'll rarely use? OS/2
even boots up faster. And I am talking about a single-user environment.
Give me faster and cleaner code any day...
Post by marysmiling2002
On the other hand, it's no less expensive for the software to fail
than before. A simple application running on a million desktop
machines still costs hugely in lost data or productivity if it fails
routinely, due to the fact that it is failing on a millon machines
instead of just one (though the cost of each single failure may be
much, much lower than the cost of a single failure in the mainframe
days).
Before, you had fewer people relying on software. As more and more people
and companies become computer-bound and the cost of human labor becomes
more expensive, sure, it costs more... Mainframes are a bit more reliable
and are less subject to viruses.
Post by marysmiling2002
One cost has actually gone up since the PC revolution: support. It's
much more costly to configure, maintain, and support thousands of
desktop boxes than one mainframe. One of the costs of buggy software
is support, and it can be very high due to the fact that it must
be "fixed" over and over again, in many different physical
locations.
You got it! A study in Computerworld stated as much. One of the things
companies don't see is this: you update an entire department's machines and
software, then another, and then another. Now the conversion takes years,
as compared to the mainframe, where it may take a couple of months.
Post by marysmiling2002
Due to support costs, trends in system design now favor centralizing
critical functions in client/server configurations.
[Sounds like mainframes again]
Post by marysmiling2002
Once you move to
a single server serving thousands of users, we are right back to the
need for reliability, scalability, and availability we used to have,
only there is much more hardware available more cheaply now, so we
still don't need the kind of efficient use of hardware resources we
once did.
[In my opinion, we still do... You can't keep throwing hardware at a
problem when the problem is software]
Post by marysmiling2002
At any rate, the place of the mainframe in the modern
world is more often as a server than as a host (with a few well
known exceptions). I would contend that it is still one of the best
server systems in existence, though its high cost of both
procurement and operation mean its only suitable for the large
enterprise. No surprise that this is exactly how IBM is positioning
it these days.
The cost of supporting a mainframe is diminishing: you no longer need
a water chiller or acres of floor space, and the power consumption is
decreasing.
Post by marysmiling2002
It's worth raising the question of whether the way we write software
now is better than the way we wrote software back when computing was
much more expensive.
Many people (myself included) casually (unscientifically) believe
that it is actually less expensive to follow a cycle of design, then
validation, then lower level design, then validation, and so forth
before coding, and then validating the code on paper before testing
begins. This lifecycle model is called the "waterfall" model by
process experts, and, due in part to the fact that it is older, it
is not considered to be a "cutting edge" lifecycle model.
Nonetheless, these basic realities still exist: Correcting a problem
in a high level design is nearly always much less expensive than
correcting the same problem in a lower level design. Correcting a
problem in a design is nearly always much less expensive than
correcting the problem in an implementation of that design.
Carefully considering high level options before taking a certain
direction can dramatically decrease the amount of work needed to
solve a problem, since some high-level design directions are much
more expensive than others.
All of these realitites would suggest that it is usually cheaper to
use a traditional design process than it is to just start coding and
then go through many iterations until the software system ends up in
its final state.
With respect to qualtiy, as opposed to cost, these realities also
exist: Research suggests that black box testing will find about 40%
of the flaws in a software system, whereas reading the code will
find about 90% of them. Furthermore, the bugs found in black box
testing will tend to be different kinds of bugs than those found in
inspection, meaning that using both methods makes it easy to
approach 100% of the bugs in the system. Also, carefully considered
high level design decisions lead to simpler solutions. Understanding
the design thoroughly before coding leads to better organized
implementations (well designed module boundaries, etc.), which not
only lowers maintenance cost, but also results in systems that are
both easier to validate, and, generally, more valid from the start.
All of these realities would suggest that software designed using
traditional methods would usually have higher quality than software
done using "Extreme Software Engineering" type lifecycle models.
Antecdotal evidence supports these observations. Many "old timers"
can tell stories about how teams performed what, by today's
standards, would be considered extraordinary feats of software
engineering very quickly, and with very high quality. This is
partially due to the methodologies, and partially due to the fact
that those older systems (e.g. IBM mainframes) provided much more
enlightened system interfaces and programming support than newer
systems. The latter is, of course, due to economic factors. When you
can charge millions for each system sold, a lot more resources can
be expended when designing it.
Still, since we think it's actually both better AND cheaper to
design things on paper, we should still be able to realize both
gains in development efficiency AND quality when designing and
implementing modern systems to run on modern system architectures.
For documentation supporting these contentions, see the books "Rapid
Development--taming wild software schedules," and "Code Complete,"
both by Steve McConnell, as well as "Writing Solid Code," by Steve
Maguire.
The main reason people like myself complain about modern practices
isn't because we're nostalgic, but because modern economic realities
(which are great, we all agree), have the unfortunate side effect of
not encouraging careful design as much as prior economic realities
did.
Agreed... The place to nip it in the bud is at the schools where
programming is taught. But alas, on a recent visit back to the high school,
the computer teacher stated, "We don't teach programming any more, but
teach how to use packaged software instead." So what happens when this
packaged software cannot provide the solution that is desired, I asked.
The look - priceless... (after all, someone has to write the software
behind these packages).

Scott



marysmiling2002
2002-10-27 23:07:24 UTC
Permalink
Hi Scott,
From your responses, I'd say you and I are basically on the same
page. I agree with what you are saying.

I wanted to offer a couple of clarifications to my prior post.

With respect to mainframes being essential, I agree with you. They
remain relevant and important for a certain class of problems. What
I meant to say was that they are no longer essential for everything.
There are now a lot of things that can be done at lower cost using
less sophisticated systems.

With respect to not needing efficiency, I agree with you there too.
Efficient software is still important. My point was that there was
once a time when it was impossible to solve nontrivial problems
without paying a lot of attention to efficiency. With the new
economic situation, it is possible to do things less efficiently but
still have something that works, because we have the hardware
resources available. It's a shame we aren't taking full advantage of
the hardware we have, but I only meant to say that now it's
possible, whereas in the past efficiency was absolutely necessary to
even have a workable solution.

With respect to the decreasing cost of running mainframes, I think
that's the crux of this whole thread, as suggested by its subject.

The decreasing cost of hardware has had two effects:

1. It's easier to write programs that do complicated things, can run
most anywhere, and aren't very carefully designed for efficiency.

2. Existing elaborate, efficient software systems like MVS can now be
run on very inexpensive hardware.

In other words, there is a philosophical question:

As hardware gets cheaper, do we get systems that do more and more
things per hardware dollar spent (conversely, large, elaborate,
multiuser systems get cheaper and cheaper), or do we simply spend
less and less money on software efficiency to do the same thing as
before (i.e. the low cost of hardware is used to offset the cost of
software)?

I think that question has caused me angst for the last six or seven
years. It's very complicated and difficult to process.
From the standpoint of ethics, it seems to me that the low cost of
hardware should be passed on to the consumer. In other words,
systems should work the way they always did, but they should be
cheaper. If my PC has the CPU power that a large mainframe once had,
I should be able to do what was once done with a large mainframe
(ignoring for the moment the other complexities like I/O channels).
To me, it seems somehow "wrong" that inefficient software
engineering should eat up all of the additional machine resources
available due to reduced cost. In other words, my PC today has about
20 times the power of my PC in 1994, but I still do about the same
things with it. It is nowhere near 20 times as capable a machine.

On the other hand, there are market realities. If I run a software
company, and we try to make everything as efficient as we would have
made it 30 years ago, our competitors will get their products to
market quicker and cheaper, and most customers probably won't notice
the difference, because the average PC owner has no reason to want
to run a multiuser system for transaction processing that is shared
among hundreds of users.

There are those that say the new hardware resources should be eaten
up in usability. For example, it takes perhaps hundreds of times
more machine resources to support an elaborate graphical user
interface than a character based one. If you try to convince most
users in your company that they should go back to the UI of CICS so
that the company can run cheaper than before, there will probably be
a revolt. From the standpoint of a lot of users, the company's IT
cost should remain fixed, and the users should get more and more
capability at the desktop, with greater ease of use and less
technical knowledge required. There is no question that less
training is required on modern systems than in the old days due to
significant usability gains, which come sometimes at great cost in
efficiency.

My favorite example is the stream file vs. the record file. A record
oriented system can run probably several orders of magnitude more
efficiently. Fixed length character fields are MUCH simpler and more
efficient to process than null-terminated strings of arbitrary
length. Dynamic allocation accounts for a HUGE amount of overhead in
modern systems, and it was completely absent from the essential
mainframe system. Yet the mainframe was more capable. The difference
is mainly usability. Users REALLY like not having fixed sized fields
for everything, even if it costs 100 times as much in machine
resources and complexity to do that.
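
To make the record-versus-stream point concrete, here is a minimal C
sketch. The 80-byte layout and field names are invented for illustration
(they are not from any real system): the fixed-length field is picked up
by offset with no scanning and no allocation, while the same data as a
delimited, heap-allocated string has to be allocated, scanned for its
length, and parsed for the delimiter.

    /* Illustrative only: fixed-length record field vs. delimited C string. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define REC_LEN  80   /* card-image record: every field at a known offset */
    #define ACCT_OFS 0
    #define ACCT_LEN 8
    #define NAME_OFS 8

    int main(void)
    {
        char rec[REC_LEN];
        memset(rec, ' ', REC_LEN);
        memcpy(rec + ACCT_OFS, "12345678", ACCT_LEN);
        memcpy(rec + NAME_OFS, "SMITH", 5);

        /* Record-oriented: the account number is simply bytes 0-7. */
        printf("account: %.*s\n", ACCT_LEN, rec + ACCT_OFS);

        /* Stream-oriented: allocate, copy, then scan for the delimiter. */
        char *line = malloc(64);
        if (line == NULL)
            return 1;
        strcpy(line, "12345678,SMITH");
        size_t len = strcspn(line, ",");   /* scan to find where the field ends */
        printf("account: %.*s\n", (int)len, line);
        free(line);
        return 0;
    }

Neither form is "right"; the point is only that the flexible form buys its
convenience with allocation and scanning on every field, which is exactly
the overhead being traded for usability.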

After literally years of thought, I have come to this place:

Systems should be divided into client systems and server systems.

Usability issues and such should be solved in client systems. End
users are happy. Windows and MacOS are really good at this sort of
thing--arguably much better than, say, UNIX. The fact that hardware
is cheaper can (and does) mean an easy, rich user experience at the
client. After all, a PC can be had for about the same price as was
once paid for a terminal, so the PC (and the web browser and other
client programs) is the new terminal, right?

To solve the problem of support costs and so forth, the actual logic
of information systems should not run on the client. In fact, I
would argue that most client software should be standard software
(like the web browser) rather than application specific. So, the
server takes the place of the old mainframe. But there are some
differences:

With hardware a lot cheaper, we can have a different server for each
purpose. Separate web servers, database servers, mail (or groupware)
servers, and so on and so forth. Networking also makes this a
reality. Remember, there was nothing like heterogeneous networks in the
old days.

With more servers, most single servers don't have to scale to the
same heights as they used to, so the trade offs can be made
differently. For example, there are systems like Win2K server and
UNIX that are easier to learn than MVS. If customers want a system
that is easier vs. one that is more efficient, they can make that
choice. If they'd rather put more burden on the system programmer,
and, in return, have a system that is MUCH more robust and scalable,
they can choose that way too.

That's all great, but there are still some problems:

Software costs for z/OS and OS/390 are extremely high compared to
any other system (with OS/400 a close second). Using FLEX (or even
Hercules), I should be able to decide I want to take on the cost in
human terms of z/OS, and choose that as my server platform, but IBM
has priced it out of reach of everyone but huge corporations, and,
they seem to want to conceal the fact that it will run on
inexpensive hardware. This seems like "the great lie" (with no
malice intended toward IBM).

Most people think that the mainframe is enormously scalable because
the hardware is so big. They don't realize that the system
architecture is what makes it scalable. You can run that system
architecture on smaller systems and it's still really scalable. Yes,
zSeries boxes range up to the thousands of MIPS, and no PC or UNIX
box can achieve that, but you can still do a LOT of work on a 60 MIP
box using simple PC hardware to emulate the mainframe hardware
architecture, and that seems to be a somewhat well-guarded secret.

IBM is able to demand a premium for the software because of a
combination of:

a) Everyone thinks it's the hardware they are paying for.
b) Nobody else has succeeded in creating a competitive platform for
the user that wants efficiency at the cost of some of the usability
features.
c) IBM hasn't milked it for all it's worth yet, so they still feel
there is something to be gained by keeping the OS out of reach of
everyone except those who have the money for mainframe hardware.

This is a sad situation. I hope that, by working with Hercules, we
can "educate the masses" to the idea that there are other kinds of
systems besides UNIX and Windows for servers, and eliminate the fear
that people seem to have of the mainframe. Even computer
professionals seem to fear the mainframe. No, it's not what I would
give my mother to run on her PC, but a competent IT professional
should have the mainframe as an option when choosing a server
platform, especially for database type work. We need to get it out
of the realm of "arcane black art" and into the mainstream, and
Hercules with MVS 3.8 could accomplish that. As the market changes,
IBM changes. If they see a profitable mass market for z/OS running
on inexpensive hardware, they'll be all over it. Our job is to
create that market for them.

IBM is big into hardware convergence, meaning a large number of OS
platforms are available for a small number of hardware platforms.
This gives the consumer more choices, while IBM can satisfy
customers with fewer hardware SKUs. They are already converging the
AS/400 and RS/6000 (iSeries and pSeries) platforms into a single
hardware box that supports LPARs and can run Linux. When that's
done, you can buy a single server, partition it, and run OS/400,
AIX, and Linux on different partitions.

It is also pretty likely that Microsoft will jump onto that
bandwagon and get back into the business of Windows (Win64) for
PowerPC, so that adds Windows to the list of choices, and you can
have them all at once with LPARs.

There is still a lot of resistance to bringing the mainframe into
that fold. I don't know for sure, but I am guessing that the zSeries
group is somewhat elitist (from what I have heard). They want to be
the biggest, the best, the most expensive. But IBM is already beyond
the point of segmenting the market by size (small, midrange,
mainframe). They are at the point of wanting to say that you have
all of these different system choices, and you can scale each of
them up and down with various hardware configurations. The S/390
architecture lends itself to that idea very well, but they have to
get beyond the huge obstacle of fear that they will be slaughtering
a giant cash cow if they do that. Eventually it will probably
happen, and, I think we can do our part to help it along.

Whew!

I never intended this post to be so long. I guess I had a lot to say.

--Dan
sunny
2002-10-27 23:21:11 UTC
Permalink
All of this is great. Somehow, though, there are a few issues people here
probably have not addressed:

(1) running an OS like MVT/MVS on a home PC has to have objectives,
    like those applications we used to have;
(2) and those objectives, like the applications, have to have some
    stability;
(3) security, as in RACF and such - so far this has not been addressed,
    except that we all know SAF was not "publicly" available until MVS/SP;
(4) some training institutions have to be established, sort of like the
    infrastructure of SHARE and GUIDE; these may even exist, but how many
    are well known to the new "users" who will move away from OSes such
    as those from MS?

So keep going, guys; this group is doing really well.


sonnen
Post by marysmiling2002
Hi John,
Not sure if I'm reading your post right.
Did you mean to say that it is less expensive to construct software
without designing it on paper first, or performing the other
validation steps in the classic process?
While it's not me that said it, I got the impression that the first part
was correct.
Post by marysmiling2002
Or did you mean that the lower quality of modern software vs. older
software is less costly now than it used to be due to the fact that
computing resources are less expensive?
It seems to me there are two main themes to this discussion.
Post by marysmiling2002
Because of the change in cost factors, it is now much cheaper to
throw hardware at a problem than software. Because of that, highly
efficient systems like the S/360 architecture and its descendants
are no longer essential for most problems. Designs that are a lot
less clever suffice most of the time. Due to this change, companies
that make software are unwilling to expend resources making it
highly efficient, scalable, or whatever. Why should they?
I would not say that they are no longer essential; if that were true,
there would be no more mainframes. True, the rule seems to be to throw
hardware at the problem. I have seen companies that would rather throw
in hardware than force programmers to write more efficient code. As a
matter of fact, it was discouraged even to offer to tell them how to do so.
As for why companies should write more efficient code, I have to agree
with you. However, when you get into a multi-user environment, it is
more critical, unless again you are willing to throw more hardware at it.
Seeing how MS's Outlook Express 6 runs on Windows XP as opposed to
Netscape Communicator on OS/2, it's like night and day even though XP
runs on a faster processor. And for what? More features that I'll
rarely use? OS/2 even boots up faster. And I am talking about a
single-user environment. Give me faster and cleaner code any day...
Post by marysmiling2002
On the other hand, it's no less expensive for the software to fail
than before. A simple application running on a million desktop
machines still costs hugely in lost data or productivity if it fails
routinely, due to the fact that it is failing on a million machines
instead of just one (though the cost of each single failure may be
much, much lower than the cost of a single failure in the mainframe
days).
Before, you had fewer people relying on software. As more and more
people and companies become computer bound and the cost of humans
becomes more expensive, sure it costs more... Mainframes are a bit
more reliable and are less subject to viruses.
Post by marysmiling2002
One cost has actually gone up since the PC revolution: support. It's
much more costly to configure, maintain, and support thousands of
desktop boxes than one mainframe. One of the costs of buggy software
is support, and it can be very high due to the fact that it must
be "fixed" over and over again, in many different physical
locations.
You got it! A study in Computerworld stated as much. One of the things
companies don't see is this: you update an entire department's machines
and software, and then another, and then another. Now the conversion
takes years, as compared to the mainframe, which may take a couple of
months.
Post by marysmiling2002
Due to support costs, trends in system design now favor centralizing
critical functions in client/server configurations.
[Sounds like mainframes again]
Post by marysmiling2002
Once you move to a single server serving thousands of users, we are
right back to the need for reliability, scalability, and availability
we used to have, only there is much more hardware available more
cheaply now, so we still don't need the kind of efficient use of
hardware resources we once did.
[In my opinion, we still do... You can't keep throwing hardware at a
problem when the problem is software]
Post by marysmiling2002
At any rate, the place of the mainframe in the modern world is more
often as a server than as a host (with a few well known exceptions).
I would contend that it is still one of the best server systems in
existence, though its high cost of both procurement and operation
means it's only suitable for the large enterprise. No surprise that
this is exactly how IBM is positioning it these days.
The cost of supporting a mainframe is diminishing: you no longer need
a water chiller, you don't need acres of floor space, and the power
consumption is decreasing.
Post by marysmiling2002
It's worth raising the question of whether the way we write software
now is better than the way we wrote software back when computing was
much more expensive.
Many people (myself included) casually (unscientifically) believe
that it is actually less expensive to follow a cycle of design, then
validation, then lower level design, then validation, and so forth
before coding, and then validating the code on paper before testing
begins. This lifecycle model is called the "waterfall" model by
process experts, and, due in part to the fact that it is older, it
is not considered to be a "cutting edge" lifecycle model.
Correcting a problem
in a high level design is nearly always much less expensive than
correcting the same problem in a lower level design. Correcting a
problem in a design is nearly always much less expensive than
correcting the problem in an implementation of that design.
=== message truncated ===


S. Vetter
2002-10-28 00:04:14 UTC
Permalink
On item #3, there is a company in Europe that was advertising some security
product that was said to work on MVS 3.8. Never tried it. The e-mail I
received is given below.

Scott

------------
Post by sunny
All of this is great; somehow, there are a few issues people here
probably have not addressed yet:
(1) running an OS like MVT/MVS on a home PC has to have objectives,
like the applications we used to have;
(2) and those objectives, like the applications, have to have some
stability;
(3) security, as in RACF and such, has not been addressed so far,
although we all know that SAF was not "publicly" available until
MVS/SP;
(4) some training institutions have to be established, an
infrastructure rather like SHARE and GUIDE; these may even exist, but
how many are well known to those new "users" who will move away from
operating systems such as those from MS?
So keep going, guys; this group is doing really well.
sonnen
Hi all,

If anybody is interested in using a security product (like RACF, but not
RACF) on MVS 3.8, please contact me directly.

We will adapt our software for this environment; anybody can receive a
license for the Hercules/MVS 3.8 environment free of charge.

regards

Gregor Plucinski
S/390 System Consultant
MAINFRAME Co Ltd http://www.mainframe.mainnet.com.pl/
POLAND, Warsaw
MAINFRAME, founded in 1989, is an MVS and OS/390 consulting and software
production company; its main areas are security and database software.
Internet : www.grzes.com




tom balabanov
2002-10-28 04:20:00 UTC
Permalink
One should realize that most of the modern software companies don't have to go through all the exhaustive testing; this expense is passed on to the consumer.
Microsoft et al. are pushing code out, getting the major bugs out, and relying on alpha and beta sites to find the real bugs.
That, I think, is why they don't spend as much time up front on the design: they don't have to pay for the debugging, so why should they rigorously design?
That is why NASA spends so much up front, and they don't use leading-edge software.
----- Original Message -----
From: marysmiling2002
To: hercules-390-***@public.gmane.org
Sent: Sunday, October 27, 2002 12:38 PM
Subject: [hercules-390] Re: Running a mainframe at home is a interesting IBM needs to listen


Hi John,

Not sure if I'm reading your post right.

Did you mean to say that it is less expensive to construct software
without designing it on paper first, or performing the other
validation steps in the classic process?

Or did you mean that the lower quality of modern software vs. older
software is less costly now than it used to be due to the fact that
computing resources are less expensive?

It seems to me there are two main themes to this discussion:

1. Economics:

Because of the change in cost factors, it is now much cheaper to
throw hardware at a problem than software. Because of that, highly
efficient systems like the S/360 architecture and its descendants
are no longer essential for most problems. Designs that are a lot
less clever suffice most of the time. Due to this change, companies
that make software are unwilling to expend resources making it
highly efficient, scalable, or whatever. Why should they?

On the other hand, it's no less expensive for the software to fail
than before. A simple application running on a million desktop
machines still costs hugely in lost data or productivity if it fails
routinely, due to the fact that it is failing on a million machines
instead of just one (though the cost of each single failure may be
much, much lower than the cost of a single failure in the mainframe
days).

One cost has actually gone up since the PC revolution: support. It's
much more costly to configure, maintain, and support thousands of
desktop boxes than one mainframe. One of the costs of buggy software
is support, and it can be very high due to the fact that it must
be "fixed" over and over again, in many different physical
locations.

Due to support costs, trends in system design now favor centralizing
critical functions in client/server configurations. Once you move to
a single server serving thousands of users, we are right back to the
need for reliability, scalability, and availability we used to have,
only there is much more hardware available more cheaply now, so we
still don't need the kind of efficient use of hardware resources we
once did. At any rate, the place of the mainframe in the modern
world is more often as a server than as a host (with a few well
known exceptions). I would contend that it is still one of the best
server systems in existence, though its high cost of both
procurement and operation means it's only suitable for the large
enterprise. No surprise that this is exactly how IBM is positioning
it these days.

2. Methodology:

It's worth raising the question of whether the way we write software
now is better than the way we wrote software back when computing was
much more expensive.

Many people (myself included) casually (unscientifically) believe
that it is actually less expensive to follow a cycle of design, then
validation, then lower level design, then validation, and so forth
before coding, and then validating the code on paper before testing
begins. This lifecycle model is called the "waterfall" model by
process experts, and, due in part to the fact that it is older, it
is not considered to be a "cutting edge" lifecycle model.

Nonetheless, these basic realities still exist: Correcting a problem
in a high level design is nearly always much less expensive than
correcting the same problem in a lower level design. Correcting a
problem in a design is nearly always much less expensive than
correcting the problem in an implementation of that design.
Carefully considering high level options before taking a certain
direction can dramatically decrease the amount of work needed to
solve a problem, since some high-level design directions are much
more expensive than others.

All of these realities would suggest that it is usually cheaper to
use a traditional design process than it is to just start coding and
then go through many iterations until the software system ends up in
its final state.

With respect to quality, as opposed to cost, these realities also
exist: Research suggests that black box testing will find about 40%
of the flaws in a software system, whereas reading the code will
find about 90% of them. Furthermore, the bugs found in black box
testing will tend to be different kinds of bugs than those found in
inspection, meaning that using both methods makes it easy to
approach 100% of the bugs in the system. Also, carefully considered
high level design decisions lead to simpler solutions. Understanding
the design thoroughly before coding leads to better organized
implementations (well designed module boundaries, etc.), which not
only lowers maintenance cost, but also results in systems that are
both easier to validate, and, generally, more valid from the start.
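
As a quick back-of-the-envelope illustration of why combining the two
methods approaches 100% (assuming, purely for the sake of the sketch,
that the two techniques miss defects independently - the figures above
don't guarantee that), in Python:

# Illustrative arithmetic only, using the rates quoted above.
p_blackbox = 0.40   # fraction of defects found by black box testing
p_inspect = 0.90    # fraction of defects found by reading the code
p_either = 1 - (1 - p_blackbox) * (1 - p_inspect)
print(f"combined detection rate: {p_either:.0%}")   # -> 94%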

All of these realities would suggest that software designed using
traditional methods would usually have higher quality than software
done using "Extreme Software Engineering" type lifecycle models.

Anecdotal evidence supports these observations. Many "old timers"
can tell stories about how teams performed what, by today's
standards, would be considered extraordinary feats of software
engineering very quickly, and with very high quality. This is
partially due to the methodologies, and partially due to the fact
that those older systems (e.g. IBM mainframes) provided much more
enlightened system interfaces and programming support than newer
systems. The latter is, of course, due to economic factors. When you
can charge millions for each system sold, a lot more resources can
be expended when designing it.

Still, since we think it's actually both better AND cheaper to
design things on paper, we should still be able to realize both
gains in development efficiency AND quality when designing and
implementing modern systems to run on modern system architectures.

For documentation supporting these contentions, see the books "Rapid
Development--taming wild software schedules," and "Code Complete,"
both by Steve McConnell, as well as "Writing Solid Code," by Steve
Maguire.

The main reason people like myself complain about modern practices
isn't because we're nostalgic, but because modern economic realities
(which are great, we all agree), have the unfortunate side effect of
not encouraging careful design as much as prior economic realities
did.

Regards,
--Dan
Post by John Alvord
Post by Adam Thornton
Post by marysmiling2002
I find a common method used by many young programmers nowadays is to
blast out some code as quickly as they can type, compile it and fix
errors until it compiles cleanly, and then start running it to see
if it works. There is little to no human validation of the code or
logic.
Even more sadly, this modern approach seems to discourage design, in
the sense that it is so cheap and easy to compile and run programs
now that it requires a lot of discipline on the part of the
programmer to do designs on paper and put in design validation
effort.
I think you've just described "Extreme Programming."
Bah.
Kids these days.
The factor you may not have considered fully is the dramatic
reduction in cost of computing. Compared to (say) 1970, the $ cost of
human work has gone up maybe 4 times and the cost of computation has
been reduced by (say) 10,000. The obvious compensating strategy is to
lean on the computing side - assuming that minimizing costs is the
goal. If you are after some other - purity/nostalgic - goal, then cost
isn't part of the equation...
john alvord
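
As a rough worked example of that shift (taking the illustrative "say"
figures above at face value - human work about 4x more expensive,
computation about 10,000x cheaper), a short Python sketch:

# Illustrative arithmetic only, using the (say) figures quoted above.
labor_1970, compute_1970 = 1.0, 1.0       # normalized 1970 costs
labor_now = labor_1970 * 4                # human work ~4x more expensive
compute_now = compute_1970 / 10_000       # computation ~10,000x cheaper
shift = (labor_now / compute_now) / (labor_1970 / compute_1970)
print(shift)   # -> 40000.0: an hour of human effort now trades against
               # roughly 40,000 times more computation than it did in 1970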
S. Vetter
2002-10-28 05:24:18 UTC
Permalink
(Biting my tongue to a great extent.) You are probably correct about the way that Microsoft does things. But you left out one thing: the beta testers have to
PAY to be testers and to report back the bugs found. Sort of like saying, "Hey! I want to be a crash test dummy. And I'll even pay to run the risk of being
killed or maimed for life!"

The military, I hear, also behaves like NASA; they don't use up-to-date software either.

Scott

----------
Post by tom balabanov
One should realize that most of the modern software companies don't have to go through all the exhaustive testing; this expense is passed on to the consumer.
Microsoft et al. are pushing code out, getting the major bugs out, and relying on alpha and beta sites to find the real bugs.
That, I think, is why they don't spend as much time up front on the design: they don't have to pay for the debugging, so why should they rigorously design?
That is why NASA spends so much up front, and they don't use leading-edge software.
=== message truncated ===
Ronald Tatum
2002-10-28 17:56:38 UTC
Permalink
NASA is different, apparently, depending on where you look. I worked on the
telemetry preprocessor for the Voyager project (I even have a framed
certificate saying I was awarded the Public Service Group Achievement Award
for Voyager Ground Data System Development and Operations) at JPL.

The machine was a highly modified Univac MTC (essentially, they sort of
chopped the MTC in two so they had a string of Univac 1530 uniprocessors)
they somehow acquired from the US Navy. The programming language (and I'm
using that term *very* loosely) was an assembler that ran on a Moccomp IV.
The assembler was generated by a company in Pasadena (no, I'll not name it -
what I have to say is not a libel, but I do fear being sued for slander) and
the stupid thing couldn't generate all the instructions the MTC/1530 had. As
a result, there were *lots* of octal constants in the source language files.

As problems/bugs were found, we had to make fixes (duh) and test them
on either real spacecraft data or synthetic data streams. We'd load our
standalone system from tape (no, there wasn't what you'd call an OS with
apps running under it - the whole thing was one giant slobber of code) and
load our octal patch decks in. For "efficiency", each card held an address
for the three octal instructions (max) on that card. I found out that I
could put just *one* patch on a card and key in the assembler source after
it as sort of a comment. I was told in no uncertain terms that I was not to
do such a thing; so about once a month we collected all our damned patches
and hand-backtranslated the octal to assembler source (hopefully we hadn't
lost our minds yet) and reassembled a new version of the TPP. There were
three machine strings: one for each of the two Voyager spacecraft and the
third for a hot backup and program development/testing.

Productivity? Efficiency? Whuzzat? I don't know what my time was billed
at, but I was paid fairly well. As a contract programmer, I loved what
NASA/JPL was doing to me as a taxpayer ....Sort of like what a fairly good
computer scientist said to an IBM executive at a SHARE conference - "As a
shareholder, I love how you're screwing me as a customer." And since his
budget was a few tens of thousands per month, the IBM guy didn't do anything
except flash a really insincere smile and quickly move off to another group.

Regards,
Ron Tatum

BTW, I knew I was in deep stuff when I was working on a manufacturing
information system for Systems Manufacturing Division/Components/World Trade
Corporation and showed the senior CE at the site a message I got from either
OS/360 Rel 13 or Rel 15/16 that indicated I should raise an APAR with my IBM
CE and the old f..t asked me "What's an APAR?" Ah, yes, the cobbler's
children syndrome.
----- Original Message -----
From: "S. Vetter" <svetter-yWtbtysYrB+***@public.gmane.org>
To: <hercules-390-***@public.gmane.org>
Sent: Sunday, October 27, 2002 11:24 PM
Subject: Re: [hercules-390] Re: Running a mainframe at home is a interesting
IBM needs to listen
Post by S. Vetter
(Biting my tongue to a great extent.) You are probably correct about the
way that Microsoft does things. But you left out one thing: the beta
testers have to PAY to be testers and to report back the bugs found.
Sort of like saying, "Hey! I want to be a crash test dummy. And I'll
even pay to run the risk of being killed or maimed for life!"
The military, I hear, also behaves like NASA; they don't use up-to-date
software either.
Scott
=== message truncated ===
b***@public.gmane.org
2002-10-27 11:18:35 UTC
Permalink
YAH!

When I was a kid, I had to walk to and from school. 20 miles each way.
Uphill in both directions.

During the winter, we'd always have 5 foot snowfalls and we still had to
walk. I hear kids today complain about not having good boots. Fooey. Back
then we didn't even have feet.

Darn kids.

----- Original Message -----
From: "Adam Thornton" <adam-uX/***@public.gmane.org>
To: <hercules-390-***@public.gmane.org>
Sent: Saturday, October 26, 2002 8:55 PM
Subject: Re: [hercules-390] Re: Running a mainframe at home is a interesting
IBM needs to listen
Post by Adam Thornton
Post by marysmiling2002
I find a common method used by many young programmers nowadays is to
blast out some code as quickly as they can type, compile it and fix
errors until it compiles cleanly, and then start running it to see
if it works. There is little to no human validation of the code or
logic.
Even more sadly, this modern approach seems to discourage design, in
the sense that it is so cheap and easy to compile and run programs
now that it requires a lot of discipline on the part of the
programmer to do designs on paper and put in design validation
effort.
I think you've just described "Extreme Programming."
Bah.
Kids these days.
Adam
--
"My eyes say their prayers to her / Sailors ring her bell / Like a moth
mistakes a light bulb / For the moon and goes to hell." -- Tom Waits
rocral2
2002-11-12 14:47:46 UTC
Permalink
(sorry, this is the answer to an off-topic question)
Post by Andy Kane
Afterthought 2: Does anyone have - or know where to find - a copy of
McCracken and Dorn's book "Numerical Methods in Fortran Programming"
1964 edition? I have a personal reason for wanting a copy, or if that
isn't possible, a copy of the Acknowledgements page.
I had been thinking about this title, and finally I remembered it was a
complementary book when I started learning FORTRAN IV in 1973 on a
FACOM 25 (FUJITSU), while I was studying industrial engineering in
Barcelona.

Finally, I have just found this book, and also the Spanish version of it
from 1966, in the internet library catalog of my old university.

See it at:

(original english version - 1 book)
http://leslu.upc.es/cgi-bin/vtls.web.gateway?authority=0705-42480&conf=080000++++++++++++++

(1966 spanish version - 6 books)
http://leslu.upc.es/cgi-bin/vtls.web.gateway?authority=0353-00080&conf=080000++++++++++++++

Please paste the broken URLs back together before browsing. Don't worry
if you know some Spanish and still don't understand anything:
this library web site is built in Catalan.

Let me know if you want some information, photocopies, scanned pages
or similar.

Alex Garcia
BARCELONA

mailto:rocral2-***@public.gmane.org



Andy Kane
2002-11-12 20:49:06 UTC
Permalink
Hola Alex,

Muchas gracias. Yes I recognized the language as Catalan, and since I
do have a fair reading ability in Spanish, I could understand the web
pages perfectly.

I'm sending email off-list with a small request.

Thank you for your trouble (and your great memory).

Shalom from Tel Aviv. Andy.
--- snips ---
Post by rocral2
Please paste the broken URLs back together before browsing. Don't worry
if you know some Spanish and still don't understand anything:
this library web site is built in Catalan.
Let me know if you want some information, photocopies, scanned pages
or similar.
Alex Garcia
BARCELONA
gregnunan
2002-11-13 13:58:22 UTC
Permalink
Andy,
Post by Andy Kane
Afterthought 2: Does anyone have - or know where to find - a copy of
McCracken and Dorn's book "Numerical Methods in Fortran Programming"
1964 edition? I have a personal reason for wanting a copy, or if that
isn't possible, a copy of the Acknowledgements page.
Take a look at Alibris (http://www.alibris.com/) - there are 23
copies of the book with 16 being the 1964 edition!

HTH,

Greg.



Andy Kane
2002-11-14 12:00:56 UTC
Permalink
Greg,

Ohmyohmyohmyohmy...

I hadn't heard of this site before.
What a wonderful source of great material, and, I predict, a wonderful
drain for disposable income!

Thanks very much.
Shalom. Andy


--- In hercules-390-F5Bj5G+***@public.gmane.org, "gregnunan" <***@n...> wrote:
--- snip ---
Post by gregnunan
Take a look at Alibris (http://www.alibris.com/) - there are 23
copies of the book with 16 being the 1964 edition!
HTH,
Greg.
b***@public.gmane.org
2002-10-27 10:52:40 UTC
Permalink
Peter,

My first computer was also the 1620, but it was removed from the school in
favor of an IBM 1130, which was the first computer I got to know really
well. Take a look at www.ibm1130.org. They have an emulator. Very cool.

-- Bob
----- Original Message -----
From: "Peter J Farley III" <pjfarley3-/***@public.gmane.org>
To: <hercules-390-***@public.gmane.org>
Sent: Friday, October 25, 2002 1:58 AM
Subject: [hercules-390] Re: Running a mainframe at home is a interesting IBM
needs to listen
Post by Peter J Farley III
Who here is closest to my age? Bomber, that was a well written
message, (I can use your nickname here?).
Well, I'm not near you, but I am before you (1950). ;-)
Peter
P.S. -- If it matters, my first CPU was an IBM 1620 (2nd generation
relays-n-transistors HW, variable-length decimal operations, BCD
(6-bit) character set). It also had the first disk drive I ever saw,
an IBM 1311 (specs unknown). The horde of relay clicks when it was
"computing" was pretty noisy, as I remember.
Jeffrey R. Broido
2002-10-25 11:03:24 UTC
Permalink
Greg,

Yes, you can call me Bomber! People have been calling me that since
1955 when I was eight and, inspired by Don Herbert ("Mr. Wizard"), I
inadvertently exploded a milk bottle full of alcohol with my mother,
so I guess that makes me older than almost anyone else here. Thanks
for the compliment! I don't always write well, but passion seems to
help, and that guy's post certainly did fire me up. As for
Scrabble, my wife and I are fanatics and play roughly six games a
day using two PCs side-by-side. If you or anyone else here would
like to play with us in real time over the net, just say the word!
My ICQ # is 7129800, my AIM95 screen name is thebomber and my Yahoo
messenger name is broidoj.

Bomber

--- In hercules-390-F5Bj5G+***@public.gmane.org, "Gregg C Levine" wrote:

<snip>
Wow. Who here is closest to my age? Bomber, that was a well
written message, (I can use your nickname here?). This technology
is, well, peculiar, and rightly so, it was designed early on. And
the software is also peculiar, the early OSes, with the exception
of Linux/390 is all we have to work with. Jay, the word bozo is a
twenty point score, if you're playing Scrabble that is.
b***@public.gmane.org
2002-10-27 10:47:05 UTC
Permalink
Greg,

I was born in '56. I went to a college in NJ which was connected by RJE to systems
at Rutgers where Jeff Broido (a.k.a. Bomber) was a system programmer.

I have very fond memories of those days. OS/MVT 21.8 was the king of the
world. Jeff was the black knight who stood on the bridge declaring, "None
shall pass." Occassionally, we'd sneak around him. (Remember FLARP?)

-- Bob Flanders

----- Original Message -----
From: "Gregg C Levine" <hansolofalcon-XfrvlLN1Pqtfpb/***@public.gmane.org>
To: <hercules-390-***@public.gmane.org>
Sent: Thursday, October 24, 2002 9:12 PM
Subject: RE: [hercules-390] Re: Running a mainframe at home is a interesting
IBM needs to listen


Hello from Gregg C Levine
Excuse me? I was born in 1962. And when I got involved with computers,
it happens, that the big thing on campus, was probably OS/360. It is
more capable than MVS, now, and even then, when MVS was evolving out of
probably MFT. I strongly suggest you tone down your postings, especially
since a lot of us, are also using Hercules to try out, different
solutions for Linux/390. Adam? You were born in '71? Wow...... Who here
is closest to my age? Bomber, that was a well written message, (I can
use your nickname here?). This technology is, well, peculiar, and
rightly so, it was designed early on. And the software is also peculiar,
the early OSes, with the exception of Linux/390 is all we have to work
with. Jay, the word bozo is a twenty point score, if you're playing
scrabble that is.
-------------------
Gregg C Levine hansolofalcon-XfrvlLN1Pqtfpb/***@public.gmane.org
------------------------------------------------------------
"The Force will be with you...Always." Obi-Wan Kenobi
"Use the Force, Luke." Obi-Wan Kenobi
(This company dedicates this E-Mail to General Obi-Wan Kenobi )
(This company dedicates this E-Mail to Master Yoda )
-----Original Message-----
Sent: Thursday, October 24, 2002 12:57 PM
Subject: [hercules-390] Re: Running a mainframe at home is a
interesting IBM
needs to listen
Post by Hugo Drax
I never had the opportunity to touch a mainframe (born in 71) I live
in the world of TCP/IP,Servers,routers heh heh heh
Frankly, if you were born in 71 and consider mvs to be anything other
than complete crap compared to unix, then one has to question if you
know either very well.
Jeffrey R. Broido
2002-10-27 16:55:01 UTC
Permalink
Bob,

Flarp? As I recall, that was one of the early passwords I made-up
for CALL/OS when we first wrote the mod allowing anyone to logon as
the administrator rather than dedicating a terminal in the machine
room. We finally settled upon a rotation of three more complex
passwords, none of which I can reveal for reasons I'm ashamed to
admit. I think Flarp (one of several words, including poop and
frak, that I used interchangeably when I couldn't think of something
more appropriate) achieved the status of myth, at least at Rutgers,
following its wide dissemination by the hacker who discovered it.
Subsequently, we XOR'd the new passwords with X'DEADFACEDEADFACE'
which, believe it or not, stopped them cold.
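
A minimal sketch of that sort of masking, in Python and purely
illustrative (this is not the actual CALL/OS code), assuming a
repeating eight-byte key of X'DEADFACEDEADFACE':

KEY = bytes.fromhex("DEADFACEDEADFACE")

def xor_mask(data: bytes, key: bytes = KEY) -> bytes:
    # XOR each byte with the key, repeating the key as needed; applying
    # the same operation twice recovers the original bytes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

masked = xor_mask(b"FLARP")        # what would sit in the password file
print(masked.hex().upper())        # looks nothing like the clear text
print(xor_mask(masked))            # -> b'FLARP' again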

I also remember David Slater who, in 1973, was a Rutgers U.
graduate student and teaching assistant. He taught an undergraduate
course on operating system design and assigned a class project, the
object of which was to get into supervisor state and/or key 0 and
crash the system. The project was successful and we crashed more
than once due to his shenanigans. This was not our sandbox system,
mind you. We had no such thing; we had but one system, and, with
just under one (1) MIPS, it supported all of Rutgers and 30 plus
other institutions of higher education, typically with six batch
initiators and close to 200 timesharing users on CALL/OS, APL/XM6
and ATS. Slater had discovered that one could IDENTIFY one's own in-
line code as an OPEN appendage, among others, and it would be
invoked in supervisor state. Shortly thereafter, we sent our S/360
Model 67 to RPI in Troy, NY and replaced it and OS/360 MVT R21.8
with an S/370 Model 158 and VS2 R1 (SVS), which had considerably
more robust state switching security.

But Black Knight? I couldn't be the Black Knight for I have all of
my limbs intact, assuming you discount the bursitis in my right
shoulder or the ruined cruciate ligament in my left knee, ripped
through when my left foot got caught in the spokes of my motorcycle
wheel on the Rutgers campus in Piscataway, also in 1973. If you
visit The Hill Center For Mathematical Whatzis on that campus and
examine the men's room off the Grand Lobby in the cellar you'll
find, scratched into the stall dividers, under at least a dozen
layers of paint, "Broido is a Klingon" in many different hands.
When he was a High School student at Rutgers Prep, Chris Darrel
ported a Star Trek game to CALL/OS Fortran and we put it in the **
library, sort-of CALL/OS's link list, which meant that anyone could
run it (RUN **KIRK) but not list or save it. Due to state politics
too involved and boring to describe here, most of our non-Rutgers
users were moved to the Princeton U. data center for a short time.
During that period, the Princeton student hackers, who were
apparently much more competent than the Rutgers student hackers,
broke CALL/OS's feeble security and, when the missing load moved
back to Rutgers and the CALL/OS databases were merged, there were
already hundreds of copies of the game, most with scripts modified
to add puerile, little touches ("Beam me the #&$% up, Scotty,"
etc.). My boss at the time, Julian Wachs, offhandedly assigned me
the task of getting rid of these for, in a matter of a couple of
months, we found that more than half of the seven 3330-1 disk packs
occupied by timesharing user data were taken up with copies of the
game. I wrote code to do this and ran it weekly as some kids had
theirs backed-up on paper tape and would persistently reload them.
Hence, I became a Klingon, but there was no vehemence attached. It
was all just part of the game.

And we could be awfully naive. Take, for example, the Freshman
whose name escapes me, who carried all of his belongings (mostly
listings on green bar paper) in a ratty, powder blue Grasshopper™
carry-on bag and told everyone who would listen that his IQ was so
high it was unmeasurable. He had taken to hanging out in the
SysProg office and would camp out in front of one of our microfiche
machines, studying system code. Eventually, of course, and on a
regular basis, he would have his fun and bring the system down. Our
management tried unsuccessfully to get him expelled and did revoke
his computer privileges, upon which he transferred out to another
school.

Then there was the time that an earlier boss, Dave Kanter, left a
job to be run with the I/O clerks who, after they fed it through the
2540, stuck it in his output bin. This was counter to his own
policy, for our job cards contained a password of sorts in the
accounting field which was used by a MOD to IEHPROGM to prevent such
things as SCRATCH VTOC. Of course, the deck was found (we were
never certain, but signs pointed again to Mr. Slater) and, one fine
day, the contents of all of our packs mounted STORAGE disappeared.
As luck would have it, Dave Cole, who subsequently wrote XDC, the
best debugger the world has ever known (http://www.colesoft.com),
had just been working on a program he called CLEANUP which would
clean the storage volumes of old or improper files. As a result, he
had IEHLISTs of all the volumes, taken just an hour or so before the
VTOCs were scratched. We shut down the data center and the two of
us sat in front of a 2260 console, one reading DSCBs in hex and the
other typing them back using IMASPZAP. It took all night, but we
only lost a couple of datasets. You simply can't imagine the
overwhelming sense of achievement. We only made back-ups once a
week and they were already almost a week old, so this was really our
only solution.

Not all of our disasters had such neat solutions, but even they make
for fond memories like yours and I have said often that I would give
almost anything to go back to those heady, late pioneering days and
do it all again, including the 90 hour weeks and coming to work at
3:00 AM to apply PTFs from little, yellow card decks with pink cover
cards... That is, I'd go if I could take my wife and three cats
with me!

Regards,
Bomber
I was born in '56. I went to a college in NJ which was connected
by RJE to systems at Rutgers where Jeff Broido (a.k.a. Bomber)
was a system programmer. I have very fond memories of those days.
OS/MVT 21.8 was the king of the world. Jeff was the black knight
who stood on the bridge declaring, "None shall pass."
Occasionally, we'd sneak around him. (Remember FLARP?)
Sam Knutson
2002-10-27 15:03:00 UTC
Permalink
Hi,
Post by b***@public.gmane.org
(Remember FLARP?)
At the risk of encouraging a rambling OT thread... I found this for flarp

flarp
/flarp/ [Rutgers University] Yet another metasyntactic variable (see foo).
Among those who use it, it is associated with a legend that any program not
containing the word "flarp" somewhere will not work. The legend is
discreetly silent on the reliability of programs which *do* contain the
magic word.

Care to share your FLARP story?


Best Regards,

Sam Knutson
mailto:sam-***@public.gmane.org
My Home Page http://www.knutson.org
CBT Tape http://www.cbttape.org



b***@public.gmane.org
2002-10-27 15:52:20 UTC
Permalink
Sam,

Funny... When I was in school, I was associated with another fellow who had
discovered the password of a system programmer. The password was FLARP. (I
don't know whose it was .. Jeff??) Further, I don't remember (or think I
ever knew) if it was a TS account or a batch account, but this password
opened a door to a wide variety of knowledge. And, of course, as any
respectable hacker from those days would tell you, knowledge belongs in the
hands of the people. Or, at least, knowledge belongs in the hands of a
hacker.

Behind the door lay riches not previously known. Source code to routines
showing how the passwords were hashed. Code to routines that demonstrated
how to get into supervisor state. (As I remember, a function was added to
an existing SVC, or a new SVC was added, such that if a certain register
contained a certain value, it returned to your problem program in
supervisor state.) Access to offline packs that contained a wealth of
information. I know we had the code to the 1130 RJE Terminal and the 1130
operating system.
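
A minimal sketch of the idea, in Python rather than assembler, with the
register number and magic value invented purely for illustration -- this
is not the actual OS/360 SVC code, just the shape of the back door
described above:

    MAGIC_REGISTER = 1
    MAGIC_VALUE = 0xC6D3C1D9   # made-up value (EBCDIC "FLAR", as it happens)

    def svc_handler(registers, psw):
        # Simulate dispatching one SVC. 'registers' maps register number
        # to value; 'psw' carries a problem_state flag (True = unprivileged).
        if registers.get(MAGIC_REGISTER) == MAGIC_VALUE:
            # The undocumented path: return with the problem-state bit off,
            # i.e. the caller comes back in supervisor state.
            psw["problem_state"] = False
            return "returned in supervisor state"
        return "normal SVC processing"

    caller_psw = {"problem_state": True}
    print(svc_handler({1: MAGIC_VALUE}, caller_psw))   # back door taken
    print(caller_psw["problem_state"])                 # False: now privileged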

Personally, I used variants of FLARP as passwords (FLARP123, FLARPRULES,
FLARPLIVES, FLARPME) for two decades. I even have a hat a friend had made
for me that has the word FLARP on the front. Then I learned some Russian,
found they have a lot of neat words to use as passwords, and rarely use
FLARP anymore.

-- Bob

----- Original Message -----
From: "Sam Knutson" <sam-***@public.gmane.org>
To: <hercules-390-***@public.gmane.org>
Sent: Sunday, October 27, 2002 4:03 PM
Subject: [hercules-390] Re: Running a mainframe at home is a interesting IBM
needs to listen
Post by Sam Knutson
Hi,
Post by b***@public.gmane.org
(Remember FLARP?)
At the risk of encouraging a rambling OT thread... I found this for flarp
flarp
/flarp/ [Rutgers University] Yet another metasyntactic variable (see foo).
Among those who use it, it is associated with a legend that any program not
containing the word "flarp" somewhere will not work. The legend is
discreetly silent on the reliability of programs which *do* contain the
magic word.
Care to share your FLARP story?
Best Regards,
Sam Knutson
My Home Page http://www.knutson.org
CBT Tape http://www.cbttape.org
zenith89
2002-10-25 03:00:00 UTC
Permalink
Jim, I think you came on a little too strong here. As you know
personally I'm not a big fan of the other system, but I don't think
it's fair to call it "crap" -- because everything works! And works
very well. I found that once you got something working it always
worked, unlike some systems we could all name. I think you are
confusing one part of the user interface, JCL, with the whole
operating system, assuming you are not merely trolling.

MVS was (and is) pretty good compared to some of its peers. I recall
reading a book on Unix where the author had a test: can a system's
Fortran compiler compile a file that was generated by a Fortran
program? He said some systems failed that test, but didn't mention
which ones; I heard from somebody later that Burroughs operating
systems failed this test. Can you imagine that? If true, now *that* is
"crap".

...CPV
Frankly,(snip)
Psychedelic Harry
2002-10-25 03:40:12 UTC
Permalink
Post by zenith89
Jim, I think you came on a little too strong here. As you know
personally I'm not a big fan of the other system, but I don't think
its fair to call it "crap" -- because everything works! And works
very well. I found that once you got something working it always
worked, unlike some systems we could all name. I think you are
confusing one part of the user interface, JCL, with the whole
operating system, assuming you are not merely trolling.
MVS was (and is) pretty good compared to some of it peers. I recall
reading a book on Unix where the author had a test: can a systems
Fortran compiler compile a file that was generated by a Fortran
program? He said some systems failed that test, but didn't mention
which ones; I heard from somebody later that Burroughs operating
systems failed this test. Can you imagine that? If true, now *that* is
"crap".
The Fortran program generating the failing Fortran source might be
called crap, but how does that reflect on the operating system?

psychedelic-harry-9q/xBM6aKHVWk0Htik3J/***@public.gmane.org

...will scab for food...


zenith89
2002-10-25 13:40:30 UTC
Permalink
The problem was that the operating system was so strange that there
was no way a Fortran compiler could read a file that had been produced
by a Fortran program. I heard that some of the Burroughs OSs were bad for
this, for example the output from a Fortran program was a "Fortran"
file and the output from an Algol program was an "Algol" file; a
Fortran program could not read an "Algol" file.

...CPV

Disclaimer: I never had the pleasure of using a Burroughs. I hope I'm
not libeling Burroughs here; other than the above peculiarity, I heard
they were pretty good. IMHO, like most of the BUNCH, they deserved better
than total oblivion.
Post by Psychedelic Harry
The Fortran program generating the failing Fortran source might be
called crap, but how does that reflect on the operating system?
S. Vetter
2002-10-25 14:11:31 UTC
Permalink
Having worked on a Burroughs 3700, 4700, and 6700, I can say that the OS
was pretty good; I never saw one crash. As a matter of fact, I never saw
one of their card readers require as much maintenance as IBM's version.
Using only COBOL in a Burroughs environment was not too bad, with only a
few core dumps, as opposed to IBM, where it seemed like every time I
turned around I was getting a core report.

One strange thing about the Burroughs RPG compiler: it translated the RPG
into COBOL and then ran the COBOL compiler. And now that I think of it
(funny story time), the 3700 had a switch that would make it into a 4700.
When the CE came in he switched it to 4700 mode, did his work, and
switched it back when done. When he was safely gone, the operators
switched it back to 4700 mode. As a matter of fact, the 6700 had numerous
lights on it like the IBM 360s, the 370/168, and so on. It seemed like the
6700 had two or three times more, but when the 6700 was idle, the lights
flashed to show the Burroughs logo.

But we are getting away from the topic of this message area.

Scott
Post by zenith89
The problem was that the operating system was so strange that there
was no way a Fortran compiler could read a file that had been produced
by a Fortran program. I heard that some of Burrough's OSs were bad for
this, for example the output from a Fortran program was a "Fortran"
file and the output from an Algol program was an "Algol" file; a
Fortran program could not read an "Algol" file.
...CPV
Disclaimer: I never had the pleasure of using a Burroughs. I hope I'm
not libeling Burroughs here, other than the above peculiarity I heard
they were pretty good. IMHO like most of BUNCH they deserved better
than total oblivion.
Post by Psychedelic Harry
The Fortran program generating the failing Fortran source might be
called crap, but how does that reflect on the operating system?
Ronald Tatum
2002-10-25 19:15:12 UTC
Permalink
Folks,
Of course what Fortran could/couldn't do doesn't have much to do with the
OS.
There is/was one *very* annoying thing about Fortran I/O:
If one punched out a bunch of floating point data in E-format (d.dddE+ee,
or some such nonsense), the "+" was suppressed, i.e., there would be a blank
column in the card. If one was printing something of the same sort, the +
was also turned into a blank, and of course if there was a "-", it was
printed. Fine for the human reading the stuff.

BUT if you fed a punched deck back into another Fortran program, it
didn't work so well - the blank was by no means the same as a "+" for the
Fortran input routines. So Fortran could not read back some things it
wrote...annoying.
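
Purely as an illustration (in Python rather than Fortran, and with
made-up field contents), the round-trip failure looks like this: a blank
where the "+" belonged defeats a strict re-read of the very field the
program just wrote.

    def parse_e_field(field):
        # Refuse a numeric field with an embedded blank, the way a strict
        # input routine would; float() alone also chokes on "E 05".
        if " " in field.strip():
            raise ValueError("bad numeric field: %r" % field)
        return float(field)

    printed = "0.123E+05"   # what you would want on the card
    punched = "0.123E 05"   # what actually got punched: the '+' suppressed

    print(parse_e_field(printed))   # 12300.0
    print(parse_e_field(punched))   # raises ValueError
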
----- Original Message -----
From: "zenith89" <zenith89-***@public.gmane.org>
To: <hercules-390-***@public.gmane.org>
Sent: Thursday, October 24, 2002 10:00 PM
Subject: [hercules-390] Re: Running a mainframe at home is a interesting IBM
needs to listen
Post by zenith89
Jim, I think you came on a little too strong here. As you know
personally I'm not a big fan of the other system, but I don't think
its fair to call it "crap" -- because everything works! And works
very well. I found that once you got something working it always
worked, unlike some systems we could all name. I think you are
confusing one part of the user interface, JCL, with the whole
operating system, assuming you are not merely trolling.
MVS was (and is) pretty good compared to some of it peers. I recall
reading a book on Unix where the author had a test: can a systems
Fortran compiler compile a file that was generated by a Fortran
program? He said some systems failed that test, but didn't mention
which ones; I heard from somebody later that Burroughs operating
systems failed this test. Can you imagine that? If true, now *that* is
"crap".
...CPV
Frankly,(snip)
b***@public.gmane.org
2002-10-27 11:11:32 UTC
Permalink
Ron..

Your message reminded me of an interesting anecdote.

At the high school I went to, before they had an 1130, they had a bunch of
Unit Record equipment, including a 403 accounting machine.
Among other things, they used it to print report cards.

The funny thing was, the 403 didn't have a + character. So IBM replaced the
Q in 4 key positions with a + slug, allowing the school to print A+, B+ etc.

-- Bob

Here's a picture of the 407, very similar to the earlier 403.

http://www.columbia.edu/acis/history/407.html


----- Original Message -----
From: "Ronald Tatum" <rhtatum-***@public.gmane.org>
To: <hercules-390-***@public.gmane.org>
Sent: Friday, October 25, 2002 8:15 PM
Subject: Re: [hercules-390] Re: Running a mainframe at home is a interesting
IBM needs to listen
Post by Ronald Tatum
Folks,
Of course what Fortran could/couldn't do doesn't have much to do with the
OS.
If one punched out a bunch of floating point data in E-format (d.dddE+ee,
or some such nonsense), the "+" was suppressed, i.e., there would be a blank
column in the card. If one was printing something of the same sort, the +
was also turned into a blank, and of course if there was a "-", it was
printed. Fine for the human reading the stuff.
BUT if you fed a punched deck back into another Fortran program, it
didn't work so well - the blank was by no means the same as a "+" for the
Fortran input routines. So Fortran could not read back some things it
wrote...annoying.
----- Original Message -----
Sent: Thursday, October 24, 2002 10:00 PM
Subject: [hercules-390] Re: Running a mainframe at home is a interesting IBM
needs to listen
Post by zenith89
Jim, I think you came on a little too strong here. As you know
personally I'm not a big fan of the other system, but I don't think
its fair to call it "crap" -- because everything works! And works
very well. I found that once you got something working it always
worked, unlike some systems we could all name. I think you are
confusing one part of the user interface, JCL, with the whole
operating system, assuming you are not merely trolling.
MVS was (and is) pretty good compared to some of it peers. I recall
reading a book on Unix where the author had a test: can a systems
Fortran compiler compile a file that was generated by a Fortran
program? He said some systems failed that test, but didn't mention
which ones; I heard from somebody later that Burroughs operating
systems failed this test. Can you imagine that? If true, now *that* is
"crap".
...CPV
Frankly,(snip)
marysmiling2002
2002-10-22 23:55:56 UTC
Permalink
I just started going through the procedure to set up TSO and couldn't
get the first job to work. Upon examining the system logs and printed
output, it looks like something's wrong with the writers. The system
log has:

000000 8000 IEE101A READY
000000 0000 IEE103I S WTR,00E *
000000 0000 IEE103I S RDR,00A *
000000 0000 IEE103I S INIT *

...
...

IEF450I WTR .00E . ABEND S2F3 TIME=23.35.42
IEF421I INIT=WTR.00E (2) NO RESTART
233542 4000 IEF421I INIT=WTR.00E (2) NO RESTART
IEF450I RDR .00A . ABEND S2F3 TIME=23.35.42
IEF421I INIT=RDR.00A (2) NO RESTART
233542 4000 IEF421I INIT=RDR.00A (2) NO RESTART
IEF450I INIT .INIT . ABEND S2F3 TIME=23.35.42
IEF421I INIT=INIT.INIT (2) NO RESTART
233542 4000 IEF421I INIT=INIT.INIT (2) NO RESTART
IEF450I WTR .00D . ABEND S2F3 TIME=23.35.42
IEF421I INIT=WTR.00D (2) NO RESTART
233542 4000 IEF421I INIT=WTR.00D (2) NO RESTART
IEF249I FOLLOWING P/R AND RSV VOLUMES ARE MOUNTED
SYSRES ON 150 (RSV-STR)
WORK01 ON 151 (RSV-PUB)
MVTRES ON 350 (P/R-PRV)
DLIB01 ON 351 (RSV-PRV)
WORK02 ON 352 (RSV-PUB)
233542 0000 S WTR,00E *
233542 0000 S RDR,00A *
233542 0000 S INIT *

...
...

IEF429I INITIATOR 'INIT' WAITING FOR WORK

When I try to run TCAMSTG1.JCL, I get output like this:

^M^L//TCAMSTG1 JOB CLASS=A,MSGCLASS=A,MSGLEVEL=(1,1)
//TCAMSTG1 EXEC ASMFC,PARM.ASM='DECK',REGION.ASM=512K
XXASM EXEC
PGM=IEUASM,REGION=50K 00020018
XXSYSLIB DD DSNAME=SYS1.MACLIB,DISP=SHR
IEUD 00040016
//ASM.SYSUT1 DD UNIT=SYSDA
X/SYSUT1 DD DSNAME=&SYSUT1,UNIT=SYSSQ,SPACE=(1700,
(400,50)), X00050018
XX SEP=
(SYSLIB) 00060018
//ASM.SYSUT2 DD UNIT=SYSDA
X/SYSUT2 DD DSNAME=&SYSUT2,UNIT=SYSSQ,SPACE=(1700,
(400,50)) 00070018
//ASM.SYSUT3 DD UNIT=SYSDA
X/SYSUT3 DD DSNAME=&SYSUT3,SPACE=(1700,
(400,50)), X00080018
XX UNIT=(SYSSQ,SEP=
(SYSUT2,SYSUT1,SYSLIB)) 00090018
XXSYSPRINT DD
SYSOUT=A 00140000
XXSYSPUNCH DD
SYSOUT=B 40140018
...
...
...
IEF236I ALLOC. FOR TCAMSTG1 ASM TCAMSTG1
IEF237I 350 ALLOCATED TO SYSLIB
IEF237I 151 ALLOCATED TO SYSUT1
IEF237I 352 ALLOCATED TO SYSUT2
IEF237I 150 ALLOCATED TO SYSUT3
IEF237I 352 ALLOCATED TO SYSPRINT
IEF237I 151 ALLOCATED TO SYSPUNCH
IEF237I 150 ALLOCATED TO SYSIN
COMPLETION CODE - SYSTEM=200 USER=0000


It seems odd to me that unit 150 is allocated to SYSIN and 151 to
SYSPUNCH. I expected 00D to SYSPUNCH, and SYSIN to be a stream in the
JCL deck. Could this be due to spooling?

Upon further examination, there is output like this in the printer
file:

^L//WTR JOB MSGLEVEL=1
//STARTING EXEC WTR
XXIEFPROC EXEC PGM=IEFSD080,PARM='PA',REGION=20K,ROLL=
(NO,NO) 10000020
//IEFPROC.IEFRDER DD UNIT=00E
X/IEFRDER DD UNIT=1403,DCB=
(BUFL=133,BUFNO=2,LRECL=133,RECFM=FM, C20000014
XX BLKSIZE=133),DSNAME=SYSOUT,DISP=
(NEW,KEEP), X30000014
XX VOLUME=
(,,,35) 40000014
COMPLETION CODE - SYSTEM=2F3 USER=0000
IEF242I ALLOC. FOR WTR 00E AT ABEND
IEF237I 00E ALLOCATED TO IEFRDER
...
...
...
^M^L//RDR JOB MSGLEVEL=1
//STARTING EXEC RDR
XXIEFPROC EXEC PGM=IEFIRC, READER FIRST
LOAD ,03000015
...
...
...
COMPLETION CODE - SYSTEM=2F3 USER=0000
IEF242I ALLOC. FOR RDR 00A AT ABEND
IEF237I 00A ALLOCATED TO IEFRDER
IEF237I 350 ALLOCATED TO IEFPDSI
IEF237I 150 ALLOCATED TO IEFDATA
...
...
...

Now, I am a total newbie at this, but I did make a best effort at
trying to find the problem. From what I can see, it almost looks as
if there is some problem with how readers and writers are started,
and subsequently allocations to the TCAMSTG1 job. Whether or not
that's the problem, I am sort of at an impasse here, since I don't
know how to proceed. Can anyone help?

Thanks in advance,
--Dan



Gregg C Levine
2002-10-28 18:04:52 UTC
Permalink
Hello again from Gregg C Levine
**Grins at his collection of things from that area of work.** Ron, you
just said what I wanted to say. Good!!!! But do you know what the
Galileo spacecraft is using for its systems? Or the Space Shuttle? Would
anyone be surprised if I told folk what's really running the Space
Shuttle?
-------------------
Gregg C Levine hansolofalcon-XfrvlLN1Pqtfpb/***@public.gmane.org
------------------------------------------------------------
"The Force will be with you...Always." Obi-Wan Kenobi
"Use the Force, Luke."  Obi-Wan Kenobi
(This company dedicates this E-Mail to General Obi-Wan Kenobi )
(This company dedicates this E-Mail to Master Yoda )
-----Original Message-----
Sent: Monday, October 28, 2002 12:57 PM
Subject: Re: [hercules-390] Re: Running a mainframe at home is a
interesting IBM needs to listen

NASA is different, apparently, depending on where you look. I worked
on the telemetry preprocessor for the Voyager project (I even have a
framed certificate saying I was awarded the Public Service Group
Achievement Award for Voyager Ground Data System Development and
Operations) at JPL.

The machine was a highly modified Univac MTC (essentially, they sort
of chopped the MTC in two so they had a string of Univac 1530
uniprocessors) they somehow acquired from the US Navy. The programming
language (and I'm using that term *very* loosely) was an assembler
that ran on a Moccomp IV. The assembler was generated by a company in
Pasadena (no, I'll not name it - what I have to say is not a libel,
but I do fear being sued for slander) and the stupid thing couldn't
generate all the instructions the MTC/1530 had. As a result, there
were *lots* of octal constants in the source language files.

As problems/bugs were found, we had to make fixes (duh) and test them
on either real spacecraft data or synthetic data streams. We'd load
our standalone system from tape (no, there wasn't what you'd call an
OS with apps running under it - the whole thing was one giant slobber
of code) and load our octal patch decks in. For "efficiency", each
card held an address for the three octal instructions (max) on that
card. I found out that I could put just *one* patch on a card and key
in the assembler source after it as sort of a comment. I was told in
no uncertain terms that I was not to do such a thing; so about once a
month we collected all our damned patches and hand-backtranslated the
octal to assembler source (hopefully we hadn't lost our minds yet) and
reassembled a new version of the TPP. There were three machine
strings: one for each of the two Voyager spacecraft and the third for
a hot backup and program development/testing.

Productivity? Efficiency? Whuzzat? I don't know what my time was
billed at, but I was paid fairly well. As a contract programmer, I
loved what NASA/JPL was doing to me as a taxpayer... Sort of like
what a fairly good computer scientist said to an IBM executive at a
SHARE conference - "As a shareholder, I love how you're screwing me
as a customer." And since his budget was a few tens of thousands per
month, the IBM guy didn't do anything except flash a really insincere
smile and quickly move off to another group.

Regards,
Ron Tatum

BTW, I knew I was in deep stuff when I was working on a manufacturing
information system for Systems Manufacturing Division/Components/World
Trade Corporation and showed the senior CE at the site a message I got
from either OS/360 Rel 13 or Rel 15/16 that indicated I should raise
an APAR with my IBM CE and the old f..t asked me "What's an APAR?"
Ah, yes, the cobbler's children syndrome.
----- Original Message -----
Sent: Sunday, October 27, 2002 11:24 PM
Subject: Re: [hercules-390] Re: Running a mainframe at home is a
interesting IBM needs to listen
Post by S. Vetter
(biting my tongue to a great extent). You are probably correct in the
way that Microsoft does things. But you left out one thing: the beta
testers have to PAY for being a tester and to report back the bugs
found. Sort of like saying "Hey! I want to be a crash test dummy. And
I'll even pay to run the risk of being killed or maimed for life!"
The military, I heard, also behaves like NASA; they don't use
up-to-date software either.
Scott
----------
Post by tom balabanov
One should realize that most of the modern software companies don't
have to go through all the exhaustive testing. This expense is passed
on to the consumer. Microsoft et al. are pushing code out, getting the
major bugs out and relying on alpha and beta sites to get the real
bugs out. That, I think, is why they don't spend as much time up front
on the design; they don't have to pay for the debugging, so why should
they rigorously design? That is why NASA spends so much up front, and
they don't use leading-edge software.
----- Original Message -----
From: marysmiling2002
Sent: Sunday, October 27, 2002 12:38 PM
Subject: [hercules-390] Re: Running a mainframe at home is a
interesting IBM needs to listen

Hi John,

Not sure if I'm reading your post right. Did you mean to say that it
is less expensive to construct software without designing it on paper
first, or performing the other validation steps in the classic
process? Or did you mean that the lower quality of modern software vs.
older software is less costly now than it used to be due to the fact
that computing resources are less expensive?

Because of the change in cost factors, it is now much cheaper to throw
hardware at a problem than software. Because of that, highly efficient
systems like the S/360 architecture and its descendants are no longer
essential for most problems. Designs that are a lot less clever
suffice most of the time. Due to this change, companies that make
software are unwilling to expend resources making it highly efficient,
scalable, or whatever. Why should they?

On the other hand, it's no less expensive for the software to fail
than before. A simple application running on a million desktop
machines still costs hugely in lost data or productivity if it fails
routinely, due to the fact that it is failing on a million machines
instead of just one (though the cost of each single failure may be
much, much lower than the cost of a single failure in the mainframe
days).

One cost has actually gone up since the PC revolution: support. It's
much more costly to configure, maintain, and support thousands of
desktop boxes than one mainframe. One of the costs of buggy software
is support, and it can be very high due to the fact that it must be
"fixed" over and over again, in many different physical locations.

Due to support costs, trends in system design now favor centralizing
critical functions in client/server configurations. Once you move to
a single server serving thousands of users, we are right back to the
need for reliability, scalability, and availability we used to have,
only there is much more hardware available more cheaply now, so we
still don't need the kind of efficient use of hardware resources we
once did. At any rate, the place of the mainframe in the modern world
is more often as a server than as a host (with a few well known
exceptions). I would contend that it is still one of the best server
systems in existence, though its high cost of both procurement and
operation means it's only suitable for the large enterprise. No
surprise that this is exactly how IBM is positioning it these days.

It's worth raising the question of whether the way we write software
now is better than the way we wrote software back when computing was
much more expensive. Many people (myself included) casually
(unscientifically) believe that it is actually less expensive to
follow a cycle of design, then validation, then lower level design,
then validation, and so forth before coding, and then validating the
code on paper before testing begins. This lifecycle model is called
the "waterfall" model by process experts, and, due in part to the fact
that it is older, it is not considered to be a "cutting edge"
lifecycle model.

Nonetheless, these basic realities still exist: Correcting a problem
in a high level design is nearly always much less expensive than
correcting the same problem in a lower level design. Correcting a
problem in a design is nearly always much less expensive than
correcting the problem in an implementation of that design. Carefully
considering high level options before taking a certain direction can
dramatically decrease the amount of work needed to solve a problem,
since some high-level design directions are much more expensive than
others. All of these realities would suggest that it is usually
cheaper to use a traditional design process than it is to just start
coding and then go through many iterations until the software system
ends up in its final state.

With respect to quality, as opposed to cost, these realities also
exist: Research suggests that black box testing will find about 40% of
the flaws in a software system, whereas reading the code will find
about 90% of them. Furthermore, the bugs found in black box testing
will tend to be different kinds of bugs than those found in
inspection, meaning that using both methods makes it easy to approach
100% of the bugs in the system. Also, carefully considered high level
design decisions lead to simpler solutions. Understanding the design
thoroughly before coding leads to better organized implementations
(well designed module boundaries, etc.), which not only lowers
maintenance cost, but also results in systems that are both easier to
validate, and, generally, more valid from the start. All of these
realities would suggest that software designed using traditional
methods would usually have higher quality than software done using
"Extreme Software Engineering" type lifecycle models.

Anecdotal evidence supports these observations. Many "old timers" can
tell stories about how teams performed what, by today's standards,
would be considered extraordinary feats of software engineering very
quickly, and with very high quality. This is partially due to the
methodologies, and partially due to the fact that those older systems
(e.g. IBM mainframes) provided much more enlightened system interfaces
and programming support than newer systems. The latter is, of course,
due to economic factors. When you can charge millions for each system
sold, a lot more resources can be expended when designing it. Still,
since we think it's actually both better AND cheaper to design things
on paper, we should still be able to realize both gains in development
efficiency AND quality when designing and implementing modern systems
to run on modern system architectures.

For documentation supporting these contentions, see the books "Rapid
Development--taming wild software schedules" and "Code Complete," both
by Steve McConnell, as well as "Writing Solid Code," by Steve Maguire.

The main reason people like myself complain about modern practices
isn't because we're nostalgic, but because modern economic realities
(which are great, we all agree) have the unfortunate side effect of
not encouraging careful design as much as prior economic realities
did.

Regards,
--Dan

Post by John Alvord
On Sat, Oct 26, 2002 at 07:41:36PM +0000, marysmiling2002
Post by marysmiling2002
I find a common method used by many young programmers nowadays is to
blast out some code as quickly as they can type, compile it and fix
errors until it compiles cleanly, and then start running it to see if
it works. There is little to no human validation of the code or logic.
Even more sadly, this modern approach seems to discourage design, in
the sense that it is so cheap and easy to compile and run programs now
that it requires a lot of discipline on the part of the programmer to
do designs on paper and put in design validation effort.
I think you've just described "Extreme Programming."
Bah.
Kids these days.
The factor you may not have considered fully is the dramatic reduction
in cost of computing. Compared to (say) 1970, the $ cost of human work
has gone up maybe 4 times and the cost of computation has been reduced
by (say) 10,000. The obvious compensating strategy is to lean on the
computing side - assuming that minimizing costs is the goal. If you
are after some other - purity/nostalgic - goal, then cost isn't part
of the equation...
john alvord
Ronald Tatum
2002-10-29 01:22:46 UTC
Permalink
Gregg,
I'm not sure what is on board the Galileo, but I wouldn't be surprised if
it's something very similar to the rather obscure RCA-manufactured
microprocessor core on the Voyagers. A little less computing power than a
decent handheld calculator; reason being that the poor thing had to be
certified to survive in a really hostile radiation environment, which takes
a lot of time to do as well as being *very* expensive for a really tiny
market.

Don't know what the shuttles have on board now, but one of the main
systems in the early days was the same basic machine that was used on Apollo
and was the weapons controller and nav computer on the F111: the IBM 4 pi
mini. I do know that (this was ca. 1975) the folks down at Clear Lake that
worked for GE and had to do something with the telemetry systems didn't have
much good to say about IBM's folks when it came to getting clear specs on
what the data streams would look like. I don't know whether the IBMers
didn't really understand what the importancve was or if they were sort of
unofficially trying to sandbag the competition/other contractors on the
Shuttle project.

Personally, I don't have much in the way of "good thoughts" about the
STS; for scientific worth unmanned missions have a distinct advantage.
Besides that, the Shuttle came in way over budget and short on lift
capability, so that Galileo had to be substantially modified and limited
because it had to go off on the Shuttle instead of Atlas/Centaur like the
Voyagers. There was also a major negative impact on the International Solar
Polar mission that had been planned as a joint project between NASA and ESA.

But NASA, at least that part at JPL and the folks running DSN, were/are
dedicated and sharp. The TPP I worked on was a bad example or a great one
when considering how *not* to build and maintain major software systems,
depending on what one is trying to argue. It was a little awesome to sit in
the control booth in MCCC and watch, in real time, the pictures being built,
one scan line at a time, on the terminals as Voyagers got nearer and nearer
to Jupiter and those truly strange moons such as Io. Pity the budget
problems disrupted so many careers at JPL.

Regards,
Ron T.
Gregg C Levine
2002-10-29 01:44:24 UTC
Permalink
Hello again from Gregg C Levine
Good so far. Yes, the Galileo is wearing a cluster of RCA CDP1802s, and
they are the rad hard variant. The Shuttle? Three of them are wearing
processors based on the ones that we model inside Hercules. I don't know
what the Endeavour is wearing. What makes the existence of Hercules
important is that it's bringing people like us together. And I agree
with you, regarding the thoughts you've espoused so far.
-------------------
Gregg C Levine hansolofalcon-XfrvlLN1Pqtfpb/***@public.gmane.org
------------------------------------------------------------
"The Force will be with you...Always." Obi-Wan Kenobi
"Use the Force, Luke."  Obi-Wan Kenobi
(This company dedicates this E-Mail to General Obi-Wan Kenobi )
(This company dedicates this E-Mail to Master Yoda )
-----Original Message-----
Sent: Monday, October 28, 2002 8:23 PM
Subject: Re: [hercules-390] Re: Running a mainframe at home is a
interesting IBM
needs to listen
Gregg,
I'm not sure what is on board the Galileo, but I wouldn't be surprised if
it's something very similar to the rather obscure RCA-manufactured
microprocessor core on the Voyagers. A little less computing power than a
decent handheld calculator; reason being that the poor thing had to be
certified to survive in a really hostile radiation environment, which takes
a lot of time to do as well as being *very* expensive for a really tiny
market.
Don't know what the shuttles have on board now, but one of the main
systems in the early days was the same basic machine that was used on Apollo
and was the weapons controller and nav computer on the F111: the IBM 4 pi
mini. I do know that (this was ca. 1975) the folks down at Clear Lake that
worked for GE and had to do something with the telemetry systems didn't have
much good to say about IBM's folks when it came to getting clear specs on
what the data streams would look like. I don't know whether the IBMers
didn't really understand what the importancve was or if they were sort of
unofficially trying to sandbag the competition/other contractors on the
Shuttle project.
Personally, I don't have much in the way of "good thoughts" about the
STS; for scientific worth unmanned missions have a distinct advantage.
Besides that, the Shuttle came in way over budget and under in lift
capability so that Galileo had to be substantially modified and limited
because it had to go off on the shuttle instead of Atlas/Centaur like the
Voyagers. There was also a major negative impact on the International Solar
Polar that had been planned as a joint project between NASA and ESA.
But NASA, at least that part at JPL and the folks running DSN, were/are
dedicated and sharp. The TPP I worked on was a bad example or a great one
when considering how *not* to build and maintain major software systems,
depending on what one is trying to argue. It was a little awesome to sit in
the control booth in MCCC and watch, in real time, the pictures being built,
one scan line at a time, on the terminals as Voyagers got nearer and nearer
to Jupiter and those truly strange moons such as Io. Pity the budget
problems disrupted so many careers at JPL.
Regards,
Ron T.
----- Original Message -----
Sent: Monday, October 28, 2002 12:04 PM
Subject: RE: [hercules-390] Re: Running a mainframe at home is a interesting IBM needs to listen
Hello again from Gregg C Levine
**Grins at his collection of things from that area of work.** Ron, you
just said what I wanted to say. Good!!!! But do you know what the
Galileo spacecraft is using for its systems? Or the Space Shuttle? Would
anyone be surprised if I told folk what's really running the Space
Shuttle?
-------------------
------------------------------------------------------------
"The Force will be with you...Always." Obi-Wan Kenobi
"Use the Force, Luke." Obi-Wan Kenobi
(This company dedicates this E-Mail to General Obi-Wan Kenobi )
(This company dedicates this E-Mail to Master Yoda )
-----Original Message-----
Sent: Monday, October 28, 2002 12:57 PM
Subject: Re: [hercules-390] Re: Running a mainframe at home is a interesting IBM needs to listen
NASA is different, apparently, depending on where you look. I worked
on the telemetry preprocessor for the Voyager project (I even have a
framed certificate saying I was awarded the Public Service Group
Achievement Award for Voyager Ground Data System Development and
Operations) at JPL.

The machine was a highly modified Univac MTC (essentially, they sort
of chopped the MTC in two so they had a string of Univac 1530
uniprocessors) they somehow acquired from the US Navy. The programming
language (and I'm using that term *very* loosely) was an assembler
that ran on a Moccomp IV. The assembler was generated by a company in
Pasadena (no, I'll not name it - what I have to say is not a libel,
but I do fear being sued for slander) and the stupid thing couldn't
generate all the instructions the MTC/1530 had. As a result, there
were *lots* of octal constants in the source language files.

As problems/bugs were found, we had to make fixes (duh) and test them
on either real spacecraft data or synthetic data streams. We'd load
our standalone system from tape (no, there wasn't what you'd call an
OS with apps running under it - the whole thing was one giant slobber
of code) and load our octal patch decks in. For "efficiency", each
card held an address for the three octal instructions (max) on that
card. I found out that I could put just *one* patch on a card and key
in the assembler source after it as sort of a comment. I was told in
no uncertain terms that I was not to do such a thing; so about once a
month we collected all our damned patches and hand-backtranslated the
octal to assembler source (hopefully we hadn't lost our minds yet) and
reassembled a new version of the TPP. There were three machine
strings: one for each of the two Voyager spacecraft and the third for
a hot backup and program development/testing.

Productivity? Efficiency? Whuzzat? I don't know what my time was
billed at, but I was paid fairly well. As a contract programmer, I
loved what NASA/JPL was doing to me as a taxpayer.... Sort of like
what a fairly good computer scientist said to an IBM executive at a
SHARE conference - "As a shareholder, I love how you're screwing me as
a customer." And since his budget was a few tens of thousands per
month, the IBM guy didn't do anything except flash a really insincere
smile and quickly move off to another group.

Regards,
Ron Tatum

BTW, I knew I was in deep stuff when I was working on a manufacturing
information system for Systems Manufacturing Division/Components/World
Trade Corporation and showed the senior CE at the site a message I got
from either OS/360 Rel 13 or Rel 15/16 that indicated I should raise
an APAR with my IBM CE and the old f..t asked me "What's an APAR?" Ah,
yes, the cobbler's children syndrome.
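For anyone who has never seen this style of maintenance, the patch deck
Ron describes - each card carrying an octal address, up to three octal
instruction words, and (unofficially) the assembler source as a trailing
comment - can be sketched roughly like this. The field layout, word
size, and names below are invented for illustration only; the real
MTC/1530 card format isn't documented in this thread.

# Rough, hypothetical sketch of an octal patch deck as described above.
# The real Univac MTC/1530 card layout is not given in the thread; this
# only illustrates the idea: address, up to three octal words, comment.

def parse_patch_card(card):
    """Return (address, words, comment) from one patch card."""
    tokens = card.split()
    address = int(tokens[0], 8)
    words = []
    i = 1
    while i < len(tokens) and len(words) < 3 and all(c in "01234567" for c in tokens[i]):
        words.append(int(tokens[i], 8))
        i += 1
    comment = " ".join(tokens[i:])   # e.g. the back-translated assembler source
    return address, words, comment

def apply_patch_deck(memory, deck):
    """Poke each word of each card into consecutive memory locations."""
    for card in deck:
        address, words, _ = parse_patch_card(card)
        for offset, word in enumerate(words):
            memory[address + offset] = word

# One patch per card, with the assembler source kept as a comment -
# the trick Ron was told not to use.
deck = [
    "041732 012345  LDA  BUFFER   restore index after the fix",
    "041733 054321  STA  COUNT",
]
core = {}
apply_patch_deck(core, deck)
print({oct(a): oct(w) for a, w in core.items()})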
----- Original Message -----
Sent: Sunday, October 27, 2002 11:24 PM
Subject: Re: [hercules-390] Re: Running a mainframe at home is a interesting IBM needs to listen
Post by S. Vetter
(biting my tongue to a great extent). You are probably correct in the
way that Microsoft does things. But you left out one thing: The beta
testers have to PAY for being a tester and to report back the bugs
found. Sort of like saying "Hey! I want to be a crash test dummy. And
I'll even pay to run the risk of being killed or maimed for life!"
The military, I heard, also behaves like NASA; they don't use up to
date software either.
Scott
----------
Post by tom balabanov
one should realize that most of the modern software companies don't
have to go through all the exhaustive testing. This expense is passed
on to the consumer.
Microsoft et al. are pushing code out, getting the major bugs out and
relying on alpha and beta sites to get the real bugs out;
that I think is why they don't spend as much time up front on the
design, they don't have to pay for the debugging so why should they
rigorously design.
That is why NASA spends so much up front, and they don't use leading
edge software
----- Original Message -----
From: marysmiling2002
Sent: Sunday, October 27, 2002 12:38 PM
Subject: [hercules-390] Re: Running a mainframe at home is a interesting IBM needs to listen

Hi John,

Not sure if I'm reading your post right.

Did you mean to say that it is less expensive to construct software
without designing it on paper first, or performing the other
validation steps in the classic process? Or did you mean that the
lower quality of modern software vs. older software is less costly now
than it used to be due to the fact that computing resources are less
expensive?

It seems to me there are two main themes to this:

Because of the change in cost factors, it is now much cheaper to throw
hardware at a problem than software. Because of that, highly efficient
systems like the S/360 architecture and its descendants are no longer
essential for most problems. Designs that are a lot less clever
suffice most of the time. Due to this change, companies that make
software are unwilling to expend resources making it highly efficient,
scalable, or whatever. Why should they?

On the other hand, it's no less expensive for the software to fail
than before. A simple application running on a million desktop
machines still costs hugely in lost data or productivity if it fails
routinely, due to the fact that it is failing on a million machines
instead of just one (though the cost of each single failure may be
much, much lower than the cost of a single failure in the mainframe
days).

support. It's much more costly to configure, maintain, and support
thousands of desktop boxes than one mainframe. One of the costs of
buggy software is support, and it can be very high due to the fact
that it must be "fixed" over and over again, in many different
physical locations.

Due to support costs, trends in system design now favor centralizing
critical functions in client/server configurations. Once you move to a
single server serving thousands of users, we are right back to the
need for reliability, scalability, and availability we used to have,
only there is much more hardware available more cheaply now, so we
still don't need the kind of efficient use of hardware resources we
once did. At any rate, the place of the mainframe in the modern world
is more often as a server than as a host (with a few well known
exceptions). I would contend that it is still one of the best server
systems in existence, though its high cost of both procurement and
operation mean it's only suitable for the large enterprise. No
surprise that this is exactly how IBM is positioning it these days.

It's worth raising the question of whether the way we write software
now is better than the way we wrote software back when computing was
much more expensive.

Many people (myself included) casually (unscientifically) believe that
it is actually less expensive to follow a cycle of design, then
validation, then lower level design, then validation, and so forth
before coding, and then validating the code on paper before testing
begins. This lifecycle model is called the "waterfall" model by
process experts, and, due in part to the fact that it is older, it is
not considered to be a "cutting edge" lifecycle model.

Nonetheless, these basic realities still exist: Correcting a problem
in a high level design is nearly always much less expensive than
correcting the same problem in a lower level design. Correcting a
problem in a design is nearly always much less expensive than
correcting the problem in an implementation of that design. Carefully
considering high level options before taking a certain direction can
dramatically decrease the amount of work needed to solve a problem,
since some high-level design directions are much more expensive than
others.

All of these realities would suggest that it is usually cheaper to use
a traditional design process than it is to just start coding and then
go through many iterations until the software system ends up in its
final state.

With respect to quality, as opposed to cost, these realities also
exist: Research suggests that black box testing will find about 40% of
the flaws in a software system, whereas reading the code will find
about 90% of them. Furthermore, the bugs found in black box testing
will tend to be different kinds of bugs than those found in
inspection, meaning that using both methods makes it easy to approach
100% of the bugs in the system. Also, carefully considered high level
design decisions lead to simpler solutions. Understanding the design
thoroughly before coding leads to better organized implementations
(well designed module boundaries, etc.), which not only lowers
maintenance cost, but also results in systems that are both easier to
validate, and, generally, more valid from the start.

All of these realities would suggest that software designed using
traditional methods would usually have higher quality than software
done using "Extreme Software Engineering" type lifecycle models.

Anecdotal evidence supports these observations. Many "old timers" can
tell stories about how teams performed what, by today's standards,
would be considered extraordinary feats of software engineering very
quickly, and with very high quality. This is partially due to the
methodologies, and partially due to the fact that those older systems
(e.g. IBM mainframes) provided much more enlightened system interfaces
and programming support than newer systems. The latter is, of course,
due to economic factors. When you can charge millions for each system
sold, a lot more resources can be expended when designing it.

Still, since we think it's actually both better AND cheaper to design
things on paper, we should still be able to realize both gains in
development efficiency AND quality when designing and implementing
modern systems to run on modern system architectures.

For documentation supporting these contentions, see the books "Rapid
Development--taming wild software schedules," and "Code Complete,"
both by Steve McConnell, as well as "Writing Solid Code," by Steve
Maguire.

The main reason people like myself complain about modern practices
isn't because we're nostalgic, but because modern economic realities
(which are great, we all agree), have the unfortunate side effect of
not encouraging careful design as much as prior economic realities
did.

Regards,
--Dan
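Those 40%/90% figures are enough to see, with a little arithmetic, why
combining the two methods approaches full coverage: if (purely as a
simplifying assumption) black box testing and code reading missed
defects independently, the combined detection rate would be
1 - (1 - 0.4)(1 - 0.9) = 94%, and the less the two sets of bugs
overlap, the closer the combined figure gets to 100%. A throwaway
Python sketch of the same arithmetic:

# Back-of-the-envelope combination of the defect-detection rates quoted
# above. Assumes, purely for illustration, that the two techniques miss
# defects independently; the research cited doesn't state the overlap.
black_box = 0.40    # fraction of defects found by black box testing
inspection = 0.90   # fraction of defects found by reading the code

combined = 1 - (1 - black_box) * (1 - inspection)
print("combined detection if independent: %.0f%%" % (combined * 100))  # 94%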
Post by John Alvord
On Sat, Oct 26, 2002 at 07:41:36PM +0000, marysmiling2002
Post by marysmiling2002
I find a common method used by many young programmers nowadays is to
blast out some code as quickly as they can type, compile it and fix
errors until it compiles cleanly, and then start running it to see if
it works. There is little to no human validation of the code or logic.
Even more sadly, this modern approach seems to discourage design, in
the sense that it is so cheap and easy to compile and run programs now
that it requires a lot of discipline on the part of the programmer to
do designs on paper and put in design validation effort.
I think you've just described "Extreme Programming." Bah.
Kids these days.
The factor you may not have considered fully is the dramatic reduction
in cost of computing. Compared to (say) 1970, the $ cost of human work
has gone up maybe 4 times and the cost of computation has been reduced
by (say) 10,000. The obvious compensating strategy is to lean on the
computing side - assuming that minimizing costs is the goal. If you are
after some other - purity/nostalgic - goal, then cost isn't part of the
equation...
john alvord
marysmiling2002
2002-10-29 04:07:35 UTC
Permalink
What an interesting thread!

Speaking of the 370/390 in aerospace applications, I read somewhere
that the air traffic control system is still all 360 assembly
language, but it runs on S/390 hardware these days. Anyone know if
that's true?

Somewhere I read that there was a movement afoot to rewrite it in Ada
that failed.

--Dan
Gregg C Levine
2002-10-29 04:15:53 UTC
Permalink
Hello from Gregg C Levine
Darned if I know. All I know is what I've stated. I do know that the
systems running the crumbling ATC network happen to be extremely old -
probably early S/390 systems. I'd need the specifics before I can
comment any further on that issue.
-------------------
Gregg C Levine hansolofalcon-XfrvlLN1Pqtfpb/***@public.gmane.org
------------------------------------------------------------
"The Force will be with you...Always." Obi-Wan Kenobi
"Use the Force, Luke."  Obi-Wan Kenobi
(This company dedicates this E-Mail to General Obi-Wan Kenobi )
(This company dedicates this E-Mail to Master Yoda )
high level design decisions lead to simpler solutions.
Understanding
Post by S. Vetter
Post by tom balabanov
the design thoroughly before coding leads to better
organized
Post by Gregg C Levine
Post by S. Vetter
Post by tom balabanov
implementations (well designed module boundaries, etc.),
which
Post by Gregg C Levine
not
Post by S. Vetter
Post by tom balabanov
only lowers maintenance cost, but also results in
systems that
Post by Gregg C Levine
are
Post by S. Vetter
Post by tom balabanov
both easier to validate, and, generally, more valid from
the
Post by Gregg C Levine
start.
Post by S. Vetter
Post by tom balabanov
All of these realities would suggest that software
designed
Post by Gregg C Levine
using
Post by S. Vetter
Post by tom balabanov
traditional methods would usually have higher quality
than
Post by Gregg C Levine
software
Post by S. Vetter
Post by tom balabanov
done using "Extreme Software Engineering" type lifecycle
models.
Post by S. Vetter
Post by tom balabanov
Antecdotal evidence supports these observations.
Many "old
Post by Gregg C Levine
timers"
Post by S. Vetter
Post by tom balabanov
can tell stories about how teams performed what, by
today's
Post by Gregg C Levine
Post by S. Vetter
Post by tom balabanov
standards, would be considered extraordinary feats of
software
Post by Gregg C Levine
Post by S. Vetter
Post by tom balabanov
engineering very quickly, and with very high quality.
This is
Post by Gregg C Levine
Post by S. Vetter
Post by tom balabanov
partially due to the methodologies, and partially due to
the
Post by Gregg C Levine
fact
Post by S. Vetter
Post by tom balabanov
that those older systems (e.g. IBM mainframes) provided
much
Post by Gregg C Levine
more
Post by S. Vetter
Post by tom balabanov
enlightened system interfaces and programming support
than
Post by Gregg C Levine
newer
Post by S. Vetter
Post by tom balabanov
systems. The latter is, of course, due to economic
factors.
Post by Gregg C Levine
When
you
Post by S. Vetter
Post by tom balabanov
can charge millions for each system sold, a lot more
resources
Post by Gregg C Levine
can
Post by S. Vetter
Post by tom balabanov
be expended when designing it.
Still, since we think it's actually both better AND
cheaper to
Post by Gregg C Levine
Post by S. Vetter
Post by tom balabanov
design things on paper, we should still be able to
realize
Post by Gregg C Levine
both
Post by S. Vetter
Post by tom balabanov
gains in development efficiency AND quality when
designing and
Post by Gregg C Levine
Post by S. Vetter
Post by tom balabanov
implementing modern systems to run on modern system
architectures.
Post by S. Vetter
Post by tom balabanov
For documentation supporting these contentions, see the
books
Post by Gregg C Levine
"Rapid
Post by S. Vetter
Post by tom balabanov
Development--taming wild software schedules," and "Code
Complete,"
Post by S. Vetter
Post by tom balabanov
both by Steve McConnell, as well as "Writing Solid
Code," by
Post by Gregg C Levine
Steve
Post by S. Vetter
Post by tom balabanov
Maguire.
The main reason people like myself complain about modern
practices
Post by S. Vetter
Post by tom balabanov
isn't because we're nostalgic, but because modern
economic
Post by Gregg C Levine
realities
Post by S. Vetter
Post by tom balabanov
(which are great, we all agree), have the unfortunate
side
Post by Gregg C Levine
effect of
Post by S. Vetter
Post by tom balabanov
not encouraging careful design as much as prior economic
realities
Post by S. Vetter
Post by tom balabanov
did.
Regards,
--Dan
Post by John Alvord
On Sat, Oct 26, 2002 at 07:41:36PM +0000, marysmiling2002
Post by marysmiling2002
I find a common method used by many young programmers nowadays is to blast out some code as quickly as they can type, compile it and fix errors until it compiles cleanly, and then start running it to see if it works. There is little to no human validation of the code or logic.
Even more sadly, this modern approach seems to discourage design, in the sense that it is so cheap and easy to compile and run programs now that it requires a lot of discipline on the part of the programmer to do designs on paper and put in design validation effort.
I think you've just described "Extreme Programming."
Bah. Kids these days.
The factor you may not have considered fully is the dramatic reduction in cost of computing. Compared to (say) 1970, the $ cost of human work has gone up maybe 4 times and the cost of computation has been reduced by (say) 10,000. The obvious compensating strategy is to lean on the computing side - assuming that minimizing costs is the goal. If you are after some other - purity/nostalgic - goal, then cost isn't part of the equation...
john alvord
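
John's ratio argument is just arithmetic, and it is easy to see how lopsided it gets. A minimal sketch (plain Python, using only the rough figures quoted above - 4x for human work, 10,000x for computation - against an arbitrary normalized 1970 baseline, so the absolute numbers mean nothing by themselves):

    # Toy arithmetic only: the 4x and 10,000x factors are the rough guesses
    # quoted above, and the 1970 baseline costs are arbitrary placeholders.
    human_1970, compute_1970 = 1.0, 1.0      # normalized 1970 costs
    human_2002 = human_1970 * 4              # human work roughly 4x costlier
    compute_2002 = compute_1970 / 10_000     # computation roughly 10,000x cheaper

    ratio_1970 = human_1970 / compute_1970
    ratio_2002 = human_2002 / compute_2002
    print(f"human/compute cost ratio, 1970: {ratio_1970:,.0f}")
    print(f"human/compute cost ratio, 2002: {ratio_2002:,.0f}")
    print(f"relative shift: {ratio_2002 / ratio_1970:,.0f}x")   # about 40,000x

By these (admittedly crude) numbers, an hour of programmer time buys enormously more machine time than it did in 1970, which is the whole economic case for leaning on the computing side.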



------------------------ Yahoo! Groups Sponsor ---------------------~-->
Get 128 Bit SSL Encryption!
http://us.click.yahoo.com/JjlUgA/vN2EAA/kG8FAA/W4wwlB/TM
---------------------------------------------------------------------~->
Ronald Tatum
2002-10-29 15:49:23 UTC
Permalink
Gregg,

I don't know what all the FAA's ATC system uses, but dating back to
1967-1969, the Custom Systems group in Poughkeepsie built a bunch of 9020s
(9010s???) and 9030s by modifying 360/50s and 360/65s to be the mainstays of
the system. Took radar and other information streams and drove the displays
in the various ATC centers around the country.
I seem to recall that the system was core-resident, loaded from tape, no
disks.

Yes, it was all done in assembler (why they didn't use BSL/PLS/whatever
it's called now, I don't know). Around 1976/77, there were some Raytheon
minis (9000 something? 1200 something? Don't know) at least at Houston
Center according to a pilot friend who toured the facility. It wasn't too
clear just where they fitted into the system. Getting spare parts, or for
that matter finding CE/SE folk to do maintenance on the old 360s is
obviously a problem.

There have probably been some efforts to use later hardware, but some of
the internal mods to the machines, as well as the special interfaces, seem
to be a problem. In any case, the FAA has blown a few billion with nothing
to show for the money in several planned modernization plans. Just
certifying and installing more modern radars is a real messy job.

Ada? Knowing the early history (it started out to be something called
DoD-1, for Department of Defense Language 1, which apparently wound up being
designed by a congressional committee of Bactrian camels) of Ada, I shouldn't be
surprised if there was some effort made which probably failed or was
abandoned.

Regards,
Ron T.
----- Original Message -----
From: "Gregg C Levine" <hansolofalcon-XfrvlLN1Pqtfpb/***@public.gmane.org>
To: <hercules-390-***@public.gmane.org>
Sent: Monday, October 28, 2002 10:15 PM
Subject: RE: [hercules-390] Re: Running a mainframe at home is a interesting
IBM needs to listen


Hello from Gregg C Levine
Darned if I know. All I know, is what I've stated. I do know, that the
systems that the crumbling ATC systems are, happen to be extremely old.
Probably early S/390 systems, I'd need the specifics before I can
comment any further on that issue.
-------------------
Gregg C Levine hansolofalcon-XfrvlLN1Pqtfpb/***@public.gmane.org
------------------------------------------------------------
"The Force will be with you...Always." Obi-Wan Kenobi
"Use the Force, Luke." Obi-Wan Kenobi
(This company dedicates this E-Mail to General Obi-Wan Kenobi )
(This company dedicates this E-Mail to Master Yoda )
-----Original Message-----
Sent: Monday, October 28, 2002 11:08 PM
Subject: [hercules-390] Re: Running a mainframe at home is a
interesting IBM
needs to listen
What an interesting thread!
Speaking of the 370/390 in aerospace applications, I read somewhere
that the air traffic control system is still all 360 assembly
language, but it runs on S/390 hardware these days. Anyone know if
that's true?
Somewhere I read that there was a movement afoot to rewrite it in
ADA that failed.
--Dan
Post by Gregg C Levine
Hello again from Gregg C Levine
Good so far. Yes, the Galileo is wearing a cluster of RCA CDP1802s, and they are the rad hard variant. The Shuttle? Three of them are wearing processors based on the ones that we model inside Hercules. I don't know what the Endeavor is wearing. What makes the existence of Hercules important, is that it's bringing people like us together. And I agree with you, regarding the thoughts you've espoused so far.
-------------------
------------------------------------------------------------
"The Force will be with you...Always." Obi-Wan Kenobi
"Use the Force, Luke." Obi-Wan Kenobi
(This company dedicates this E-Mail to General Obi-Wan Kenobi )
(This company dedicates this E-Mail to Master Yoda )
-----Original Message-----
Sent: Monday, October 28, 2002 8:23 PM
Subject: Re: [hercules-390] Re: Running a mainframe at home is a interesting IBM needs to listen
Gregg,
I'm not sure what is on board the Galileo, but I wouldn't be surprised if it's something very similar to the rather obscure RCA-manufactured microprocessor core on the Voyagers. A little less computing power than a decent handheld calculator; reason being that the poor thing had to be certified to survive in a really hostile radiation environment, which takes a lot of time to do as well as being *very* expensive for a really tiny market.
Don't know what the shuttles have on board now, but one of the main systems in the early days was the same basic machine that was used on Apollo and was the weapons controller and nav computer on the F111: the IBM 4 pi mini. I do know that (this was ca. 1975) the folks down at Clear Lake that worked for GE and had to do something with the telemetry systems didn't have much good to say about IBM's folks when it came to getting clear specs on what the data streams would look like. I don't know whether the IBMers didn't really understand what the importance was or if they were sort of unofficially trying to sandbag the competition/other contractors on the Shuttle project.
Personally, I don't have much in the way of "good thoughts" about the STS; for scientific worth unmanned missions have a distinct advantage. Besides that, the Shuttle came in way over budget and under in lift capability, so that Galileo had to be substantially modified and limited because it had to go off on the shuttle instead of Atlas/Centaur like the Voyagers. There was also a major negative impact on the International Solar Polar that had been planned as a joint project between NASA and ESA.
But NASA, at least that part at JPL and the folks running DSN, were/are dedicated and sharp. The TPP I worked on was a bad example or a great one when considering how *not* to build and maintain major software systems, depending on what one is trying to argue. It was a little awesome to sit in the control booth in MCCC and watch, in real time, the pictures being built, one scan line at a time, on the terminals as the Voyagers got nearer and nearer to Jupiter and those truly strange moons such as Io. Pity the budget problems disrupted so many careers at JPL.
Regards,
Ron T.
----- Original Message -----
Sent: Monday, October 28, 2002 12:04 PM
Subject: RE: [hercules-390] Re: Running a mainframe at home is a interesting IBM needs to listen
Hello again from Gregg C Levine
**Grins at his collection of things from that area of work.** Ron, you just said what I wanted to say. Good!!!! But do you know what the Galileo spacecraft is using for its systems? Or the Space Shuttle? Would anyone be surprised if I told folk what's really running the Space Shuttle?
-------------------
------------------------------------------------------------
"The Force will be with you...Always." Obi-Wan Kenobi
"Use the Force, Luke." Obi-Wan Kenobi
(This company dedicates this E-Mail to General Obi-Wan Kenobi )
(This company dedicates this E-Mail to Master Yoda )
-----Original Message-----
Sent: Monday, October 28, 2002 12:57 PM
Subject: Re: [hercules-390] Re: Running a mainframe at home is a interesting IBM needs to listen
NASA is different, apparently, depending on where you look. I worked on the telemetry preprocessor for the Voyager project (I even have a framed certificate saying I was awarded the Public Service Group Achievement Award for Voyager Ground Data System Development and Operations) at JPL.
The machine was a highly modified Univac MTC (essentially, they sort of chopped the MTC in two so they had a string of Univac 1530 uniprocessors) they somehow acquired from the US Navy. The programming language (and I'm using that term *very* loosely) was an assembler that ran on a Moccomp IV. The assembler was generated by a company in Pasadena (no, I'll not name it - what I have to say is not a libel, but I do fear being sued for slander) and the stupid thing couldn't generate all the instructions the MTC/1530 had. As a result, there were *lots* of octal constants in the source language files.
As problems/bugs were found, we had to make fixes (duh) and test them on either real spacecraft data or synthetic data streams. We'd load our standalone system from tape (no, there wasn't what you'd call an OS with apps running under it - the whole thing was one giant slobber of code) and load our octal patch decks in. For "efficiency", each card held an address for the three octal instructions (max) on that card. I found out that I could put just *one* patch on a card and key in the assembler source after it as sort of a comment. I was told in no uncertain terms that I was not to do such a thing; so about once a month we collected all our damned patches and hand-backtranslated the octal to assembler source (hopefully we hadn't lost our minds yet) and reassembled a new version of the TPP. There were three machine strings: one for each of the two Voyager spacecraft and the third for a hot backup and program development/testing.
Productivity? Efficiency? Whuzzat? I don't know what my time was billed at, but I was paid fairly well. As a contract programmer, I loved what NASA/JPL was doing to me as a taxpayer ....Sort of like what a fairly good computer scientist said to an IBM executive at a SHARE conference - "As a shareholder, I love how you're screwing me as a customer." And since his budget was a few tens of thousands per month, the IBM guy didn't do anything except flash a really insincere smile and quickly move off to another group.
Regards,
Ron Tatum
BTW, I knew I was in deep stuff when I was working on a manufacturing information system for Systems Manufacturing Division/Components/World Trade Corporation and showed the senior CE at the site a message I got from either OS/360 Rel 13 or Rel 15/16 that indicated I should raise an APAR with my IBM CE and the old f..t asked me "What's an APAR?" Ah, yes, the cobbler's children syndrome.
----- Original Message -----
Sent: Sunday, October 27, 2002 11:24 PM
Subject: Re: [hercules-390] Re: Running a mainframe at home is a interesting IBM needs to listen
Post by S. Vetter
(biting my tongue to a great extent). You are probably correct in the
The beta testers have to PAY for being a tester and to report back the bugs found. Sort of like saying "Hey! I want to be a crash test dummy. And I'll even pay to run the risk of being killed or maimed for life!"
The military I heard also behaves like NASA, they don't use up to date software either.
Scott
----------
Post by tom balabanov
one should realize that most of the modern software companies don't have to go through all the exhaustive testing. This expense is passed on to the consumer.
Microsoft et al. are pushing code out, getting the major bugs out and relying on alpha and beta sites to get the real bugs out, that I think is why they don't spend as much time up front on the design, they don't have to pay for the debugging so why should they rigorously design.
That is why NASA spends so much up front, and they don't use leading edge software

juergen_dobrinski
2002-10-30 15:44:11 UTC
Permalink
Hello,

the original ATC system 9020 was based on the 360/50. It contained 3
cpu's just for IO (peripheral elements) and 3 cpu's for computation
(computational elements). The redundancy was used for high
availability. The architecture was not fully compatible with the 360,
since more channels were supported and dynamic reconfiguration (also
of the storage) was possible. So a non-standard operating system was
used.

Later the 360/65 was used (9030). In that case only 2 PE's (still
360/20) but 3 cpu's (360/65) were used. For replacement the FAA
bought used machines (because the 360 was out of production at that
time) and modified them to become 9030.

The 9030D was used for the "host computer" which managed the flight
plan data while a 9030E was used for controlling the displays. The
host computers were replaced by 3083 cpu's while the 9030E for
display control were in service up to 1997!

Later spare parts were even taken out of the FAA academy computer.
There were only a few technicians remaining who understood the old
technology. A specific problem was related to the SLT modules which
were not that well encapsulated anymore because of their age. This
led to corrosion and some problems. Another problem was the cabling.
Sometimes after a repair when closing the maintenance doors on the
cpu some cables broke because of their age. So the systems were
slipping to a lower availability. As the backup systems for radar
display (running on Raytheon 760 computers) missed a couple of
features, a replacement was finally done in 1997 and the machines were
scrapped.

Well, as far as I know one system survived and may be on display in
the Udvar-Hazy Center of the National Air and Space Museum in the
future.

Juergen
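
Juergen's point about the redundant compute and peripheral elements can be turned into a back-of-the-envelope availability figure. This is only an illustration with made-up numbers (independent failures and a guessed per-element availability; the 9020's real failure rates and reconfiguration rules are not given in this thread):

    # Illustrative k-out-of-3 availability for a pool of identical,
    # independently failing elements -- NOT actual FAA engineering data.
    from math import comb

    def at_least_k_of_n(p_up, n, k):
        """Probability that at least k of n independent elements are up."""
        return sum(comb(n, i) * p_up**i * (1 - p_up)**(n - i)
                   for i in range(k, n + 1))

    p = 0.99                                 # guessed per-element availability
    print(f"single element up:  {p:.6f}")
    print(f"at least 1 of 3 up: {at_least_k_of_n(p, 3, 1):.6f}")
    print(f"at least 2 of 3 up: {at_least_k_of_n(p, 3, 2):.6f}")

Even with a mediocre per-element figure, being able to keep running on any surviving element (or any two of three) pushes the configuration well past what a single box could offer, which is presumably why the 9020/9030 design paid for the extra elements and the dynamic reconfiguration support.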
Post by rvjansen-qWit8jRvyhVmR6Xm/
Gregg,
I don't know what all the FAA's ATC system uses, but dating back to 1967-1969, the Custom Systems group in Poughkeepsie built a bunch of 9020s (9010s???) and 9030s by modifying 360/50s and 360/65s to be the mainstays of the system. Took radar and other information streams and drove the displays in the various ATC centers around the country.
I seem to recall that the system was core-resident, loaded from tape, no disks.
Yes, it was all done in assembler (why they didn't use BSL/PLS/whatever it's called now, I don't know). Around 1976/77, there were some Raytheon minis (9000 something? 1200 something? Don't know) at least at Houston Center according to a pilot friend who toured the facility. It wasn't too clear just where they fitted into the system. Getting spare parts, or for that matter finding CE/SE folk to do maintenance on the old 360s is obviously a problem.
There have probably been some efforts to use later hardware, but some of the internal mods to the machines, as well as the special interfaces, seem to be a problem. In any case, the FAA has blown a few billion with nothing to show for the money in several planned modernization plans. Just certifying and installing more modern radars is a real messy job.
Ada? Knowing the early history (it started out to be something called DoD-1, for Department of Defense Language 1, which apparently wound up being designed by a congressional committee of Bactrian camels) of Ada, I shouldn't be surprised if there was some effort made which probably failed or was abandoned.
Regards,
Ron T.
------------------------ Yahoo! Groups Sponsor ---------------------~-->
4 DVDs Free +s&p Join Now
http://us.click.yahoo.com/pt6YBB/NXiEAA/jd3IAA/W4wwlB/TM
---------------------------------------------------------------------~->
Dave Jones
2002-10-29 15:10:00 UTC
Permalink
----- Original Message -----
From: "Gregg C Levine" <hansolofalcon-XfrvlLN1Pqtfpb/***@public.gmane.org>
To: <hercules-390-***@public.gmane.org>
Sent: Monday, October 28, 2002 10:15 PM
Subject: RE: [hercules-390] Re: Running a mainframe at home is a interesting
IBM needs to listen


Hello from Gregg C Levine
Darned if I know. All I know, is what I've stated. I do know, that the
systems that the crumbling ATC systems are, happen to be extremely old.
Probably early S/390 systems, I'd need the specifics before I can
comment any further on that issue.
---------------------------------

Current FAA ATC computers are IBM 9221 systems (first introduced in 1990)
with special adaptors, etc. to interface with the radar and transponder data
inputs and graphical data displays in the control centers.

DJ


------------------------ Yahoo! Groups Sponsor ---------------------~-->
4 DVDs Free +s&p Join Now
http://us.click.yahoo.com/pt6YBB/NXiEAA/jd3IAA/W4wwlB/TM
---------------------------------------------------------------------~->
Peter D. Ward
2002-10-29 17:15:39 UTC
Permalink
I believe current ATC machines are G3-cpu boxes.

See 1999 link at:

http://www.gcn.com/archives/gcn/1998/december14/52a.htm

"The host replacement program is being run separately from the year 2000
program. The 3083 being used at the 20 en route centers is being replaced by the
IBM G3 series, Model No. 9672."

PDW
Post by Hugo Drax
----- Original Message -----
Sent: Monday, October 28, 2002 10:15 PM
Subject: RE: [hercules-390] Re: Running a mainframe at home is a interesting
IBM needs to listen
Hello from Gregg C Levine
Darned if I know. All I know, is what I've stated. I do know, that the
systems that the crumbling ATC systems are, happen to be extremely old.
Probably early S/390 systems, I'd need the specifics before I can
comment any further on that issue.
---------------------------------
Current FAA ATC computers are IBM 9221 systems (first introduced in 1990)
with special adaptors, etc. to interface with the radar and transponder data
inputs and graphical data displays in the control centers.
DJ
http://groups.yahoo.com/group/hercules-390
http://www.conmicro.cx/hercules
Your use of Yahoo! Groups is subject to http://docs.yahoo.com/info/terms/
------------------------ Yahoo! Groups Sponsor ---------------------~-->
Get 128 Bit SSL Encryption!
http://us.click.yahoo.com/JjlUgA/vN2EAA/kG8FAA/W4wwlB/TM
---------------------------------------------------------------------~->
S. Vetter
2002-10-29 05:34:32 UTC
Permalink
I had heard that it was running on hardware that still used vacuum tubes and
that since the hardware was so out of date they had to swap parts from other
locations that had them. There was a plan underway a few years ago to
replace such systems. I have no idea what the current status is. You
probably could get a good deal on some old-old equipment for those that are
into that sort of thing.

Scott

--------
Post by marysmiling2002
What an interesting thread!
Speaking of the 370/390 in aerospace applications, I read somewhere
that the air traffic control system is still all 360 assembly
language, but it runs on S/390 hardware these days. Anyone know if
that's true?
Somewhere I read that there was a movement afoot to rewrite it in
ADA that failed.
--Dan
----- Original Message -----
From: marysmiling2002
Sent: Sunday, October 27, 2002 12:38 PM
Subject: [hercules-390] Re: Running a mainframe at home
is a
Post by Gregg C Levine
interesting IBM needs to listen
Hi John,
Not sure if I'm reading your post right.
Did you mean to say that it is less expensive to construct software without designing it on paper first, or performing the other validation steps in the classic process?
Or did you mean that the lower quality of modern software vs. older software is less costly now than it used to be due to the fact that computing resources are less expensive?
It seems to me there are two main themes to this:
Because of the change in cost factors, it is now much cheaper to throw hardware at a problem than software. Because of that, highly efficient systems like the S/360 architecture and its descendants are no longer essential for most problems. Designs that are a lot less clever suffice most of the time. Due to this change, companies that make software are unwilling to expend resources making it highly efficient, scalable, or whatever. Why should they?
On the other hand, it's no less expensive for the software to fail than before. A simple application running on a million desktop machines still costs hugely in lost data or productivity if it fails routinely, due to the fact that it is failing on a million machines instead of just one (though the cost of each single failure may be much, much lower than the cost of a single failure in the mainframe days).
Then there is support. It's much more costly to configure, maintain, and support thousands of desktop boxes than one mainframe. One of the costs of buggy software is support, and it can be very high due to the fact that it must be "fixed" over and over again, in many different physical locations.
Due to support costs, trends in system design now favor centralizing critical functions in client/server configurations. Once you move to a single server serving thousands of users, we are right back to the need for reliability, scalability, and availability we used to have, only there is much more hardware available more cheaply now, so we still don't need the kind of efficient use of hardware resources we once did. At any rate, the place of the mainframe in the modern world is more often as a server than as a host (with a few well known exceptions). I would contend that it is still one of the best server systems in existence, though its high cost of both procurement and operation means it is only suitable for the large enterprise. No surprise that this is exactly how IBM is positioning it these days.
It's worth raising the question of whether the way we write software now is better than the way we wrote software back when computing was much more expensive.
Many people (myself included) casually (unscientifically) believe that it is actually less expensive to follow a cycle of design, then validation, then lower level design, then validation, and so forth before coding, and then validating the code on paper before testing begins. This lifecycle model is called the "waterfall" model by process experts, and, due in part to the fact that it is older, it is not considered to be a "cutting edge" lifecycle model.
Correcting a problem in a high level design is nearly always much less expensive than correcting the same problem in a lower level design. Correcting a problem in a design is nearly always much less expensive than correcting the problem in an implementation of that design. Carefully considering high level options before taking a certain direction can dramatically decrease the amount of work needed to solve a problem, since some high-level design directions are much more expensive than others.
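To put hypothetical numbers on that intuition, here is a minimal sketch; the multipliers are made up purely to illustrate the shape of the argument, not taken from any study:

# Purely hypothetical relative-cost multipliers, for illustration only.
cost_to_fix = {
    "high level design": 1,   # cheapest place to catch a problem
    "detailed design":   5,
    "implementation":   20,
}
defects = 10
for phase, multiplier in cost_to_fix.items():
    print(f"{defects} problems caught during {phase}: {defects * multiplier} units of rework")

The same ten problems cost an order of magnitude more to fix once they have made it into code, which is the whole argument for spending time on paper first.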
All of these realities would suggest that it is usually cheaper to use a traditional design process than it is to just start coding and then go through many iterations until the software system ends up in its final state.
With respect to quality, as opposed to cost, these realities also exist: Research suggests that black box testing will find about 40% of the flaws in a software system, whereas reading the code will find about 90% of them. Furthermore, the bugs found in black box testing will tend to be different kinds of bugs than those found in inspection, meaning that using both methods makes it easy to approach 100% of the bugs in the system.
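A quick back-of-the-envelope check (this assumes, purely for illustration, that the two methods miss bugs independently of each other, which is not something the research itself claims):

# Illustrative only: assumes black box testing and code reading
# miss defects independently.
p_black_box = 0.40    # fraction of flaws found by black box testing
p_inspection = 0.90   # fraction of flaws found by reading the code
p_missed_by_both = (1 - p_black_box) * (1 - p_inspection)   # about 0.06
print("combined detection: %.0f%%" % (100 * (1 - p_missed_by_both)))  # prints 94%

Even under that crude assumption the two methods together leave only a few percent of the flaws uncaught.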
Also, carefully considered high level design decisions lead to simpler solutions. Understanding the design thoroughly before coding leads to better organized implementations (well designed module boundaries, etc.), which not only lowers maintenance cost, but also results in systems that are both easier to validate, and, generally, more valid from the start.
All of these realities would suggest that software designed using traditional methods would usually have higher quality than software done using "Extreme Software Engineering" type lifecycle models.
Anecdotal evidence supports these observations. Many "old timers" can tell stories about how teams performed what, by today's standards, would be considered extraordinary feats of software engineering very quickly, and with very high quality. This is partially due to the methodologies, and partially due to the fact that those older systems (e.g. IBM mainframes) provided much more enlightened system interfaces and programming support than newer systems. The latter is, of course, due to economic factors. When you can charge millions for each system sold, a lot more resources can be expended when designing it.
Still, since we think it's actually both better AND cheaper to design things on paper, we should still be able to realize both gains in development efficiency AND quality when designing and implementing modern systems to run on modern system architectures.
For documentation supporting these contentions, see the books "Rapid Development: Taming Wild Software Schedules" and "Code Complete," both by Steve McConnell, as well as "Writing Solid Code," by Steve Maguire.
The main reason people like myself complain about modern practices isn't because we're nostalgic, but because modern economic realities (which are great, we all agree) have the unfortunate side effect of not encouraging careful design as much as prior economic realities did.
Regards,
--Dan
Post by John Alvord
On Sat, Oct 26, 2002 at 07:41:36PM +0000, marysmiling2002 wrote:
Post by marysmiling2002
I find a common method used by many young programmers nowadays is to blast out some code as quickly as they can type, compile it and fix errors until it compiles cleanly, and then start running it to see if it works. There is little to no human validation of the code or logic.
Even more sadly, this modern approach seems to discourage design, in the sense that it is so cheap and easy to compile and run programs now that it requires a lot of discipline on the part of the programmer to do designs on paper and put in design validation effort.
I think you've just described "Extreme Programming." Bah. Kids these days.
The factor you may not have considered fully is the dramatic reduction in cost of computing. Compared to (say) 1970, the $ cost of human work has gone up maybe 4 times and the cost of computation has been