distributed.net Faq-O-Matic
Project: OGR (Optimal Golomb Rulers)
Why is OGR disabled on my machine?
OGR may be automatically disabled for non-preemptive operating environments running on low(er)-end hardware.

The OS and CPU combinations where this automatic disabling is currently in effect (subject to change) are as follows:

  • RISC OS/ARM: (if the client is not being preemptively tasked)
    all
  • MacOS/68k:
    all 680x0 processors
  • MacOS/PPC:
    PPC601
  • Windows16/x86:
    386-class, 486-class, 586-class (incl. Cx6x86 and K5)
  • NetWare/x86:
    all

In non-preemptive environments, the client controls its run:yield quantum itself: it runs for a specific, limited number of keys/nodes (called the timeslice), yields, runs again, yields again, and so on.

The timeslice is recomputed dynamically while the client runs, and generally follows this formula: timeslice = crunch_rate_per_second / yield_frequency (*), where yield_frequency is a semi-static value representing the minimum number of times per second that the client has to yield control for the machine to remain responsive, and crunch_rate_per_second is measured continuously.
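
To make this concrete, here is a minimal sketch in C of how such a cooperative loop could recompute its timeslice. It is purely illustrative and not the actual distributed.net client code; crunch(), yield_cpu(), the yield frequency, and the initial timeslice are all made-up stand-ins.

  #include <stdio.h>
  #include <time.h>

  #define YIELD_FREQUENCY 20          /* assumed: yield ~20 times per second */

  /* stand-in for the real core: pretend to process 'nodes' nodes */
  static unsigned long crunch(unsigned long nodes) { return nodes; }

  /* stand-in for handing control back to the OS in a cooperative system */
  static void yield_cpu(void) { }

  int main(void)
  {
      unsigned long timeslice = 10000;    /* initial guess, in nodes */
      unsigned long nodes_done = 0;
      clock_t start = clock();

      for (int cycle = 0; cycle < 100; cycle++) {
          nodes_done += crunch(timeslice);   /* run for one timeslice */
          yield_cpu();                       /* then yield            */

          double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
          if (elapsed > 0.0) {
              double crunch_rate_per_second = nodes_done / elapsed;
              /* the formula from the answer above */
              timeslice = (unsigned long)(crunch_rate_per_second / YIELD_FREQUENCY);
              if (timeslice == 0)
                  timeslice = 1;
          }
      }
      printf("final timeslice: %lu nodes per run:yield cycle\n", timeslice);
      return 0;
  }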

For OGR, which has significant overhead, the larger the timeslice (up to a limit), the more efficient the core is. To put it another way: at the small timeslices that slow machines end up with, the core spends more time starting and stopping than it does actually working.
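
For example (with purely illustrative figures, not measured ones): if starting and stopping the core cost the equivalent of 5,000 nodes of work per run:yield cycle, a timeslice of 5,000 nodes would waste half of every cycle on that overhead, whereas a timeslice of 500,000 nodes would waste only about 1%.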

Moreover, the timeslice cannot be honored precisely, and under some circumstances the OGR core could end up doing several hundred thousand more nodes than it was told to do. In less critical environments such as Win16 and MacOS, such occasional "jerkiness" may be tolerable, and it is left to the user's discretion to disable OGR if they feel such behaviour is unacceptable. In mission-critical environments such as NetWare, however, any jerkiness may have disastrous consequences, so for NetWare OGR is completely disabled.


*: For all practical purposes, the formula mentioned above is sufficient, but there are some gotchas:
  • yield duration: if another task runs for a non-negligible amount of time, or the OS imposes a maximum task-switch frequency to avoid scheduling the same task too often, then this "yield duration" lowers the measured crunch_rate_per_second, and the computed timeslice will be smaller.
  • timer resolution: the formula above implies that the smallest run:yield quantum that can be computed accurately is equal to the resolution of the monotonic time source. Using longer timing periods and averaging the results makes it possible to increase the accuracy of the computation, but it can still be no better than half the timer's period. This means that if the clock's resolution were, say, 100ms, then the client can yield (at best) only every 50ms.
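
As a purely illustrative sketch (again, not client code), this is the kind of arithmetic involved when a coarse timer is averaged over a longer window; the 100ms tick, the window length, and the node counts below are all made-up figures.

  #include <stdio.h>

  #define TICK_MS 100.0                /* assumed timer resolution (ms) */

  int main(void)
  {
      /* pretend we crunched this many nodes across 50 timer ticks,
         i.e. a 5-second measurement window (figures are made up)   */
      unsigned long nodes_done = 2500000;
      unsigned long ticks_seen = 50;

      double window_s = (ticks_seen * TICK_MS) / 1000.0;
      double crunch_rate_per_second = nodes_done / window_s;

      /* any single reading is only good to about half a tick (50ms),
         but spread over a 5-second window that error is about 1%   */
      printf("crunch rate ~ %.0f nodes/s, measured over %.1f s\n",
             crunch_rate_per_second, window_s);
      return 0;
  }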

This document is: http://faq.distributed.net/?file=188

© Copyright distributed.net 1997-2013 - All rights reserved