distributed.net Faq-O-Matic : Project: OGR (Optimal Golomb Rulers) : Why is progress of OGR-28 stubspaces 1 and 2 so slow?
The short answer: stubs in these two stubspaces, and only these two, are very large. Their average size is expected to be up to ten times that of an average OGR-27 stub (800 Gnodes vs. 80 Gnodes). The biggest stub seen so far (March 2014) is almost 2,200 Gnodes.
Stubs in stubspace 28.3 will, on average (mean), be the same size as OGR-27 stubs (80 Gnodes). Some of them may be large, but there will also be many tiny ones.
Since estimated project completion dates are based on the number of stubs completed per day (not Gnodes), these dates currently appear far in the future. Please be patient and keep on crunching. :)
As soon as we complete the first two stubspaces and start work on 28.3, estimated completion dates will return to reasonable values. We think we will complete it in about 5 years (the same as OGR-27); perhaps faster with a little help from Moore's Law.
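A rough sketch of why the estimate swings so much. All numbers here are hypothetical, chosen only to match the tenfold size ratio described above; they are not actual distributed.net statistics:

```python
# Hypothetical figures for illustration only -- not real distributed.net stats.
GNODES_PER_DAY = 8_000       # assumed total throughput of all clients, in Gnodes/day
AVG_STUB_27 = 80             # Gnodes: average OGR-27-sized stub (as in 28.3)
AVG_STUB_28_EARLY = 800      # Gnodes: average stub in stubspaces 28.1/28.2

def stubs_per_day(throughput, avg_stub_size):
    """Stubs completed per day at a given Gnode throughput."""
    return throughput / avg_stub_size

def eta_days(stubs_remaining, throughput, avg_stub_size):
    """Naive ETA based on stubs/day, the way a stats page would compute it."""
    return stubs_remaining / stubs_per_day(throughput, avg_stub_size)

# The same Gnode throughput clears 10x fewer of the big early stubs per day,
# so a stub-count-based ETA looks 10x worse while 28.1/28.2 are in progress.
print(stubs_per_day(GNODES_PER_DAY, AVG_STUB_27))        # 100.0 stubs/day
print(stubs_per_day(GNODES_PER_DAY, AVG_STUB_28_EARLY))  # 10.0 stubs/day
```

The actual work done per day (in Gnodes) is the same in both cases; only the stub-count metric, and therefore the displayed ETA, is distorted.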
More technical details: in OGR-NG projects (OGR-26, OGR-27 and OGR-28), only the last stubspace is the "real" one. All other stubspaces are "artificial": they exist only to keep stubs combined at higher levels, with fewer marks. You can recognize combined stubs in your client by the asterisk (*) shown on the status line next to the number of marks.
We choose the combine ratio ourselves, which lets us control the estimated average stub size. For OGR-28 we had to choose very "tight" combining, which raises the average stub size significantly: increasing the size of the stubs in 28.1 and 28.2 was necessary to decrease the number of stubs in 28.3.
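The trade-off reduces to simple arithmetic: for a fixed amount of total work, average stub size is total work divided by stub count, so combining stubs more tightly (fewer stubs) makes each one proportionally bigger. A minimal sketch, using made-up totals:

```python
# Hypothetical illustration of the stub-size / stub-count trade-off.
# None of these numbers are real distributed.net figures.
TOTAL_GNODES = 8e9  # assumed total work in one stubspace, in Gnodes

def avg_stub_size(total_gnodes, num_stubs):
    """With a fixed total amount of work, average stub size = total / count."""
    return total_gnodes / num_stubs

# "Tight" combining: 10x fewer combined stubs -> 10x bigger average stub.
loose = avg_stub_size(TOTAL_GNODES, 100_000_000)  # 80.0 Gnodes per stub
tight = avg_stub_size(TOTAL_GNODES, 10_000_000)   # 800.0 Gnodes per stub
print(loose, tight)
```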
Huge tree files become very difficult to manage: OGR-27 finished with a tree file of 40 GB. Even a simple rescan of that file took about 2 hours, and the more complex recycle procedure took about 8 hours. OGR-28 will have even more stubs than OGR-27, although billions of them will take only a few seconds each to complete, so we have to combine them as much as possible.
© Copyright distributed.net 1997-2013 - All rights reserved