|distributed.net Faq-O-Matic : Project: OGR (Optimal Golomb Rulers) : Why is OGR disabled on my machine?|
OGR may be automatically disabled for non-preemptive operating environments
running on low(er)-end hardware.
The OS and CPU combinations on which this automatic disabling is in effect (subject to change) are as follows:
In non-preemptive environments, the client controls its run:yield quantum itself; that is, it runs for a limited, specific number of keys/nodes (called the timeslice), then yields, then runs, then yields, and so on.
The timeslice is computed dynamically while the client runs, and generally follows this formula: crunch_rate_per_second/yield_frequency, where yield_frequency is a semi-static value representing the minimum number of times per second that the client must yield control for the machine to remain responsive, and crunch_rate_per_second is measured continuously.
For OGR, which has significant overhead, the larger the timeslice (up to a limit), the more efficient the core is. To put it another way: at small timeslices, which is what would be in effect on slow machines, the core ends up spending more time starting and stopping than it does actually working.
Moreover, the timeslice cannot be honored precisely, and under some circumstances the OGR core could end up doing several hundred thousand more nodes than it was told to do. In less critical environments such as Win16 and MacOS, such occasional "jerkiness" may be tolerable, and it is left to the user's discretion to disable OGR if they feel such behaviour is unacceptable. In mission-critical environments such as NetWare, however, any jerkiness may have disastrous consequences, and for NetWare, OGR is completely disabled.
© Copyright distributed.net 1997-2013 - All rights reserved