In addition to the information on the Hercules Shared Device feature presented in the "Reliability of Hercules Shared Dasd Feature" thread (https://groups.yahoo.com/neo/groups/hercules-390/conversations/messages/79719), the following analysis addresses the SYNCIO dasd device option mentioned there:
> Analysis of SYNCIO and the Hyperion Channel Updates
>
> SYNCIO was originally designed for slower single-core
> systems (processors designed before 2004, at the latest).
> On these systems, not having to create a new device thread
> (or take time for task switching) saved a substantial
> amount of time and significantly improved performance.
>
> In multi-core systems, this advantage is lost: SYNCIO
> now ties up the CPU thread, preventing it from performing
> additional work while the I/O operation is being processed
> (the complete opposite of the way start-I/O and
> start-subchannel work on real mainframes).
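
For contrast, a minimal sketch of the two dispatch models in C. The
helper names (execute_ccw_chain, queue_io_request) are hypothetical
stand-ins for the real channel code, not Hercules source:

    typedef struct DEVBLK DEVBLK;         /* opaque device block        */

    void execute_ccw_chain(DEVBLK *dev);  /* runs the I/O to completion */
    void queue_io_request(DEVBLK *dev);   /* enqueues for a dev thread  */

    /* SYNCIO model: the emulated CPU's host thread performs the I/O
     * inline, so that CPU makes no forward progress until it finishes. */
    void start_io_syncio(DEVBLK *dev)
    {
        execute_ccw_chain(dev);           /* CPU thread tied up here    */
    }

    /* Asynchronous model (how start-subchannel behaves on real
     * hardware): the CPU thread merely queues the request and resumes
     * instruction execution; a device thread performs the I/O and
     * completion is presented later as an I/O interrupt. */
    void start_io_async(DEVBLK *dev)
    {
        queue_io_request(dev);            /* returns immediately        */
    }
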
>
> With the Hyperion channel updates done by Mark Gaubatz,
> the wait time for an available device thread was virtually
> eliminated. Once running, the channel subsystem now
> executes nearly 100% of queued I/O requests without having
> to wait for thread creation. If a device thread is available
> but waiting, execution will begin within 100ms at the outside,
> even if a signal_condition wakeup is cleared before the waiter
> responds (a hole in the POSIX definition/operation of
> signal_condition).
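
The usual way to bound that exposure is a timed wait. A minimal sketch
in raw pthreads (Hercules wraps these primitives in its own macros such
as signal_condition; the names below are hypothetical):

    #include <pthread.h>
    #include <time.h>

    static pthread_mutex_t ioq_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  ioq_cond = PTHREAD_COND_INITIALIZER;
    static int ioq_depth = 0;        /* number of queued I/O requests */

    /* Wait for work with a 100ms upper bound: whether the wakeup
     * arrives, is lost, or never comes, the predicate is rechecked
     * within 100ms, so a swallowed signal costs at most that long. */
    static void wait_for_work(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        ts.tv_nsec += 100L * 1000000L;             /* +100ms */
        if (ts.tv_nsec >= 1000000000L)
        {
            ts.tv_sec  += 1;
            ts.tv_nsec -= 1000000000L;
        }

        pthread_mutex_lock(&ioq_lock);
        while (ioq_depth == 0)
            if (pthread_cond_timedwait(&ioq_cond, &ioq_lock, &ts) != 0)
                break;            /* timed out: caller rechecks queue */
        pthread_mutex_unlock(&ioq_lock);
    }
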
>
> When thread creation is required, the next entering or exiting
> I/O will cause an additional thread to be created if the queue
> is not empty after dequeuing the next I/O request; in addition,
> the queuing mechanism itself creates a new device thread early
> when an out-of-threads condition is detected. This also prevents
> a rush of new concurrent thread requests tying up critical
> kernel code paths and effectively stopping all other productive
> use of the machine (observed on both Linux and Windows).
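
A sketch of those two growth paths, with hypothetical names throughout;
this only illustrates the policy as described, not the actual Hercules
implementation:

    extern void *dequeue_request(void);      /* take next queued I/O   */
    extern void  enqueue(void *req);         /* put request on queue   */
    extern int   queue_depth(void);          /* requests still queued  */
    extern int   idle_device_threads(void);  /* threads now waiting    */
    extern void  create_device_thread(void); /* spawn exactly one      */

    /* Growth path 1: a device thread entering/exiting I/O dequeues its
     * next request and, if work is still queued behind it, spawns one
     * more thread. Growing one thread per event avoids a burst of
     * concurrent creations tying up kernel code paths. */
    void *get_next_io(void)
    {
        void *req = dequeue_request();
        if (queue_depth() > 0)
            create_device_thread();
        return req;
    }

    /* Growth path 2: the queuing mechanism itself spawns early when it
     * sees no idle thread available to service the new request. */
    void queue_io(void *req)
    {
        enqueue(req);
        if (idle_device_threads() == 0)
            create_device_thread();
    }
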
>
> It has been observed that the 100ms limit also keeps the idle
> device tasks active in the various schedulers, without placing
> them on a true idle list, from which redispatch may take
> significantly longer under some Windows and Linux setups.
>
> Device threads are not terminated unless they have been idle
> for at least two seconds and there are more than four idle
> threads; this trimming is not subject to the user's devtmax
> setting (but is always subject to devtmax -1, another holdover).
> For devtmax > 0, devtmax threads are maintained once created.
> Changing the devtmax value on the fly permits the creation of
> additional device threads, or the trimming of existing threads
> once their respective I/Os have completed or are idle.
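
One plausible reading of those trim rules as a predicate; the devtmax
handling in particular is an interpretation of the text above, not
lifted from the source:

    #include <stdbool.h>
    #include <time.h>

    #define TRIM_IDLE_SECS  2  /* minimum idle time before trimming    */
    #define EXCESS_IDLE     4  /* idle threads beyond this are excess  */

    bool may_trim(time_t idle_since, int idle_count,
                  int total_threads, int devtmax)
    {
        if (time(NULL) - idle_since < TRIM_IDLE_SECS)
            return false;      /* not idle long enough                 */
        if (idle_count <= EXCESS_IDLE)
            return false;      /* keep a small warm pool of idle threads */
        if (devtmax > 0 && total_threads <= devtmax)
            return false;      /* devtmax threads maintained once created */
        return true;
    }
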
>
> This design also keeps the device subsystem from surging:
> ramping up to meet demand, dropping the "excess" threads
> too quickly, and then having another burst of I/O activity
> trigger another round of device-thread creation and immediate
> trimming. It is possible that three additional tuning "knobs"
> could be added: one to define the minimum number of threads
> to maintain for devtmax 0, a second to specify the idle time
> before excess-thread trimming is triggered, and a third to
> define what value constitutes "excess" (the current default
> is four).
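
Entirely speculative, since these knobs do not exist: a sketch of what
they might look like, seeded with the hard-coded behavior stated above
(2 seconds, 4 idle threads); min_threads has no current equivalent, so
0 is only a placeholder:

    struct devthread_tuning
    {
        int min_threads;     /* minimum pool size for devtmax 0        */
        int trim_idle_secs;  /* idle time before trimming triggers     */
        int excess_idle;     /* how many idle threads count as excess  */
    };

    static const struct devthread_tuning defaults = { 0, 2, 4 };
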
>
> Operational Notes:
>
> * For Windows, devtmax > 0 should ALWAYS be specified;
> if Hercules runs long enough with heavy dynamic I/O loads
> and devtmax -1 or 0, Hercules will crash Windows due to
> the number of threads created and destroyed. This condition
> has been seen on Windows 7, 8, and 8.1, but has not yet
> been verified on Windows 10. On an early Intel Core i7
> laptop with plenty of resources, the condition can be
> triggered in as little as 3.5 days on Windows 7 using
> devtmax 0; the same limit was hit in less than 40 hours
> on an 8-core Intel Xeon system from 2008. (See the example
> configuration after these notes.)
>
> * For performance with heavy I/O loads, don't define more
> than 3/4 of the real cores as CPU engines; you may still
> need to reduce the number of CPU engines further and/or
> use devtmax > 0 to leave the host operating system enough
> headroom to avoid panicking and thrashing under the load.
>
> * For planning purposes, a hyperthreaded core is roughly
> equal to 0.20-0.33 real cores. Core i7 laptops should
> not be run at more than roughly 70-75% of total capability
> for long periods of time, depending on the cooling
> capabilities of the laptop (clock frequency adjustments
> take place to regulate the operational temperature of
> the chip, resulting in unwanted thermal cycling, along
> with irregular response and performance times). Higher
> performance may be achieved WITHOUT hyperthreading, and
> thermal cycling may not occur as frequently, permitting
> operation at a higher percentage of total capability.
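
Putting the first two notes into configuration terms, an illustrative
fragment for an 8-core host; the DEVTMAX and NUMCPU statements are real
Hercules configuration statements, but the specific values are only
examples, not recommendations from the analysis above:

    # Illustrative Hercules configuration fragment for an 8-core host
    DEVTMAX  8   # fixed device-thread pool; always use > 0 on Windows
    NUMCPU   6   # 6 of 8 real cores (3/4), leaving headroom for the host
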
--
"Fish" (David B. Trout)
Software Development Laboratories
http://www.softdevlabs.com
mail: ***@softdevlabs.com