MaddGamer wrote: Great to see a new version, found a bug for ya though:
When manually setting --seed to a value > 2147483647, the generated seed = 2147483647 instead of the entered value. This was on Windows (Win7 x64, CUDA and OpenCL).
Interesting. That looks like 2^31 - 1, the signed 32-bit max. It may be clipping above that - my seed is definitely a 32-bit number. It should be using an unsigned int, not a signed int, so I'll check into that. But, in general, I'm not sure that's really a "bug" so much as a "designed limit." Or is it that it's Windows only, and Linux works up to 2^32?
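For the curious, here's a minimal sketch (not the actual Cryptohaze option parser) of why the value could come back as exactly 2147483647: a signed 32-bit parse saturates at 2^31 - 1, while an unsigned 32-bit parse keeps the full range. On Linux x86-64, long is 64 bits, which is why the signed path could behave differently there.

    /* Minimal sketch - NOT the actual Cryptohaze option parser - showing how a
     * signed 32-bit parse clamps a large --seed while an unsigned parse keeps it. */
    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const char *arg = "3000000000";  /* example --seed value above 2^31 - 1 */

        /* Signed parse: where long is 32 bits (e.g. Windows), this saturates at
         * 2147483647 and sets errno to ERANGE. On Linux x86-64, long is 64 bits,
         * so the full value survives until it gets stuffed into a 32-bit field. */
        errno = 0;
        long signedSeed = strtol(arg, NULL, 10);
        printf("signed parse:   %ld (errno=%d)\n", signedSeed, errno);

        /* Unsigned parse into an explicit 32-bit type keeps 0..4294967295. */
        uint32_t unsignedSeed = (uint32_t)strtoul(arg, NULL, 10);
        printf("unsigned parse: %u\n", (unsigned)unsignedSeed);

        return 0;
    }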
MaddGamer wrote: Found this when I was trying to validate some tables between my CUDA and OpenCL systems. Otherwise, speeds are nice for OpenCL - pulling ~540M/s doing SHA1 len7 chr95 on a 5850, over 3x the speed of my Quadro FX 4800 card.
Cool. I don't have a fully optimized SHA1 algorithm for ATI yet - I'm not using BFI_INT. Need to fix that. Should be worth some speed on SHA1/etc.
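Roughly, the win is in the SHA1 "choose" function: written with OpenCL's bitselect(), Evergreen-class ATI hardware can collapse it into a single BFI_INT instead of three or four ALU ops (older drivers needed the compiled binary patched to actually emit it). A quick illustrative sketch - not the actual Cryptohaze kernel code:

    /* SHA1 F for rounds 0-19: F(b,c,d) = (b & c) | (~b & d).
     * Illustrative OpenCL only - not the actual Cryptohaze kernels. */

    /* Plain form: typically several ALU ops. */
    uint sha1_ch_plain(uint b, uint c, uint d) {
        return (b & c) | (~b & d);
    }

    /* bitselect(d, c, b) computes (d & ~b) | (c & b), which is the same
     * function; on Radeon 5xxx-class parts this can map to one BFI_INT. */
    uint sha1_ch_bfi(uint b, uint c, uint d) {
        return bitselect(d, c, b);
    }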
MaddGamer wrote: I did waste a day of compute on my OpenCL system since I forgot that I had overclocked the GPU on that system. With the overclock I was running, I was getting silent corruption of the data, and repeated runs with the same seed value resulted in generated tables with different file hashes. I still have the GPU overclocked a bit, but it's down to where I could verify that the data produced was consistent.
Yep. That'll happen. There's also the GRTVerify tool that will check chains - you can run it against a table and it will, by default, check every chain. I'd be interested to see some of the corruption you're seeing. But, yes, overclocking is bad for compute, m'kay?
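For anyone wondering what that check amounts to: conceptually it regenerates each chain from its start point and makes sure it still lands on the stored end point, so any silent bit-flip along the way shows up. A rough sketch in C, with placeholder hash/reduce functions rather than the real GRT ones or the real table format:

    /* Conceptual chain check - hashFn/reduceFn are placeholders, not the real
     * GRT algorithms. A corrupted chain won't land on its stored end point. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t hashFn(uint32_t x)               { return (x * 2654435761u) ^ (x >> 13); }
    static uint32_t reduceFn(uint32_t h, uint32_t i) { return h + i; }

    /* Returns 1 if the chain regenerates to the stored end point, 0 otherwise. */
    int verifyChain(uint32_t startPoint, uint32_t endPoint, uint32_t chainLength) {
        uint32_t current = startPoint;
        for (uint32_t step = 0; step < chainLength; step++) {
            current = reduceFn(hashFn(current), step);
        }
        return current == endPoint;
    }

    int main(void) {
        uint32_t start = 12345u, length = 1000u;
        /* Build a known-good end point, then confirm the verifier accepts it. */
        uint32_t end = start;
        for (uint32_t i = 0; i < length; i++) end = reduceFn(hashFn(end), i);
        printf("chain ok: %d\n", verifyChain(start, end, length));
        return 0;
    }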

Thanks for the feedback, and glad it's working!