Methodology

Discussion of the upcoming GPU-accelerated rainbow table implementation

Methodology

Postby alexbobp » Mon Jan 26, 2009 7:34 pm

I'm wondering how you intend to do rainbow tables with a GPU. I guess table generation is simple, but do you plan to support cracking with GPUs?

I once considered porting rainbowcrack to CUDA, but I never really made progress on that because I lacked the expertise and free time...

The way I've thought of doing this is that you could do the main hashing on the CPU, but every time you get a match, instead of immediately switching to a chain walk, you just save the alarm. When you have enough of them, you fire off a volley to the GPU, which does the chain walks in parallel. Since most of the time in a rainbow table attack is spent chasing false alarms, I think this would be practical (no large memory requirements on the GPU) and give a considerable speedup.

Let me know if this sounds doable or if my idea is stupid. As I said, I haven't gotten to work with rainbowcrack's source nearly as much as I'd have liked to.
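For what it's worth, the batched-alarm scheme can be sketched in miniature. This is a toy of my own, not rainbowcrack's code: the hash (truncated MD5), reduction function, charset, and chain length are all stand-in choices, and the "GPU volley" is just a serial loop here standing in for the parallel kernel.

```python
import hashlib

CHARSET = "abcdefghijklmnopqrstuvwxyz"

def H(pw):
    # Stand-in for the target hash (truncated MD5, just for the toy).
    return hashlib.md5(pw.encode()).digest()[:8]

def R(h, pos):
    # Position-dependent reduction: maps a hash back into password space.
    n = int.from_bytes(h, "big") + pos
    chars = []
    for _ in range(6):
        n, r = divmod(n, len(CHARSET))
        chars.append(CHARSET[r])
    return "".join(chars)

def walk_chain(start, steps):
    # Regenerate `steps` links of a chain from its start password.
    pw = start
    for p in range(steps):
        pw = R(H(pw), p)
    return pw

def build_table(starts, t):
    # Maps endpoint -> start, for chains of t passwords (t - 1 links).
    return {walk_chain(s, t - 1): s for s in starts}

def crack_batched(target, table, t):
    # Phase 1 (CPU in the proposal): for every possible chain position,
    # roll the target hash forward to an endpoint; on a table hit, save
    # the alarm instead of walking the chain immediately.
    alarms = []
    for pos in range(t - 1):
        pw = R(target, pos)
        for p in range(pos + 1, t - 1):
            pw = R(H(pw), p)
        if pw in table:
            alarms.append((table[pw], pos))
    # Phase 2 (the GPU "volley"): walk all alarmed chains to their
    # suspect position -- serial here, but embarrassingly parallel.
    for start, pos in alarms:
        candidate = walk_chain(start, pos)
        if H(candidate) == target:
            return candidate
    return None
```

The key point the post makes is that phase 2 dominates real searches because of false alarms, and each alarm is an independent chain walk, so a batch of them maps cleanly onto GPU threads.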
alexbobp
 
Posts: 4
Joined: Mon Jan 26, 2009 6:53 pm

Re: Methodology

Postby Bitweasil » Mon Jan 26, 2009 7:39 pm

Oh. Should probably have posted something here.

GPU-accelerated cracking is required to make use of the long chains that keep table size and bandwidth at sane levels.

RainbowCrack uses a chain length of 10k.

My proof-of-concept code used a chain length of 100k.

The production code will likely use chain lengths of 500k or longer.

The GPUs accelerate three things:
- Table generation
- Candidate hash generation
- Found chain regeneration

The CPU only handles merging table parts and searching the tables.

I have working code for all of this, just need to make it a little bit more robust before I bring the system online.
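To put rough numbers on the size-and-bandwidth point above: for a fixed number of keyspace points covered, longer chains mean proportionally fewer stored chains, while per-table search work grows roughly quadratically with chain length. A back-of-the-envelope sketch, assuming 10^12 links per table and 16 bytes per stored chain (both figures are my assumptions, not from this thread):

```python
def table_tradeoff(points_covered, chain_len, bytes_per_chain=16):
    # Chains needed to cover the same number of keyspace points,
    # their on-disk size, and the links needed to search one table.
    chains = points_covered // chain_len
    size_bytes = chains * bytes_per_chain
    search_links = chain_len * (chain_len - 1) // 2
    return chains, size_bytes, search_links

# RainbowCrack / proof-of-concept / production chain lengths:
for t in (10_000, 100_000, 500_000):
    chains, size_bytes, search_links = table_tradeoff(10**12, t)
    print(f"t={t}: {chains} chains, {size_bytes / 1e9:.2f} GB on disk, "
          f"{search_links:.2e} links to search")
```

Going from 10k to 500k links shrinks the stored table by 50x but multiplies the search cost by roughly 2500x, which is exactly the work being pushed onto the GPU.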
Bitweasil
Site Admin
 
Posts: 912
Joined: Tue Jan 20, 2009 4:26 pm

Re: Methodology

Postby blazer » Wed Jan 28, 2009 12:24 am

I'm wondering whether such high chain lengths will impede the table searching process. Would it take minutes, hours, or days?
blazer
 
Posts: 104
Joined: Fri Jan 23, 2009 10:18 am

Re: Methodology

Postby Bitweasil » Wed Jan 28, 2009 3:05 am

blazer wrote:I'm wondering whether such high chain lengths will impede the table searching process. Would it take minutes, hours, or days?


It depends on what you're searching with, and that's why GPUs are required for this.

A table with a chain length of 100k would take RainbowCrack about 45 minutes to search on a CPU; one of my GPUs, running the optimized algorithm, can do the same search in 15 seconds.

That's the kind of speedup we're talking about over RainbowCrack: 200x or more, from the combination of GPU hardware and a better algorithm.
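Those two data points line up with the quadratic search cost: searching one table of chain length t takes roughly t*(t-1)/2 links. A quick sanity check (the ~410M links/s GPU rate appears later in the thread; the ~2M links/s CPU rate is my assumption, chosen only to illustrate):

```python
def search_seconds(chain_len, links_per_sec):
    # One full table search regenerates candidate endpoints from every
    # chain position: roughly chain_len * (chain_len - 1) / 2 links total.
    return (chain_len * (chain_len - 1) // 2) / links_per_sec

t = 100_000
gpu_s = search_seconds(t, 410e6)        # ~12 s, near the quoted 15 s
cpu_min = search_seconds(t, 2e6) / 60   # ~42 min, near the quoted 45 min
```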

Re: Methodology

Postby Sc00bz » Wed Jan 28, 2009 8:19 am

So you get a speed of 333 M links/sec on a GPU (GTX 260?). :( I was hoping for more.
Sc00bz
 
Posts: 93
Joined: Thu Jan 22, 2009 9:31 pm

Re: Methodology

Postby Bitweasil » Wed Jan 28, 2009 4:52 pm

Sc00bz wrote:So you get a speed of 333 M links/sec on a GPU (GTX 260?). :( I was hoping for more.


Actually, it's closer to 410M, and that was on an 8800 GTX, not the GTX 260. I don't recall exactly what the GTX 260 turns, as I got the 216sp card after I finished my primary development.

I should be able to run around 450-500M links per second on a high end card - I've learned some other optimizations too while working on the brute forcers.

Re: Methodology

Postby wintermute » Tue Feb 17, 2009 9:43 pm

What sizes of tables are you anticipating? It would be amazing to get something like a 95-character table up to length 8, but that would be like 50 TB...

Also, are you going to be releasing the tables that are generated?
wintermute
 
Posts: 7
Joined: Tue Feb 17, 2009 9:34 pm

Re: Methodology

Postby Bitweasil » Wed Feb 18, 2009 12:17 am

wintermute wrote:What sizes of tables are you anticipating? It would be amazing to get something like a 95-character table up to length 8, but that would be like 50 TB...

Also, are you going to be releasing the tables that are generated?


95 to length 8 is easy. 95 to length 9... that's the tricky bit.

You're more or less correct on table sizes, though. The len9 tables will be huge.

As such, if by "releasing" you mean "Ship me a 50TB storage array & I'll fill it for you," yes. If by "releasing" you mean "Generally available for download," probably not, as there's no practical way to share 50TB.
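The gap between the two is easy to quantify: the full 95-printable-character keyspace grows by a factor of 95 per character, so length 9 is roughly 95x the work and storage of length 8.

```python
def keyspace(charset_size, max_len):
    # Total passwords of length 1 through max_len over the charset.
    return sum(charset_size ** i for i in range(1, max_len + 1))

len8 = keyspace(95, 8)   # ~6.7e15 candidates
len9 = keyspace(95, 9)   # ~6.4e17 candidates, about 95x larger
```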

Re: Methodology

Postby Sc00bz » Wed Feb 18, 2009 2:44 am

95 characters up to length 9 is about where you should develop ASICs to generate the tables: roughly 3,700 card-months of work on a 9800 GTX+ for a 99.8% success rate. Length 8 is about 39 card-months on a 9800 GTX+, and about 1 TiB of data, for a 99.8% success rate.

Re: Methodology

Postby Bitweasil » Wed Feb 18, 2009 11:55 pm

Sc00bz wrote:95 characters up to length 9 is about where you should develop ASICs to generate the tables: roughly 3,700 card-months of work on a 9800 GTX+ for a 99.8% success rate. Length 8 is about 39 card-months on a 9800 GTX+, and about 1 TiB of data, for a 99.8% success rate.


3700 GPU-months is about 300 GPU-years. That's about where I put the effort as well, and I think it's completely feasible with a large distributed project. I'm also planning to release a CPU client for those without GPUs who want to contribute (significantly slower, but 1000+ CPU cores is still useful). A CPU client also lets me use some of the spare cycles I have access to: 200+ machines, many Celeron 2.6 GHz class, though with increasing numbers of Core 2 Duo/Quad boxes. Together they're a significant amount of IOPS, especially with SSE optimizations.
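The arithmetic behind those numbers, with hypothetical contributor counts for the distributed-project scenario (the card counts are mine, not from the thread):

```python
def wall_clock_months(card_months, cards):
    # Table generation is embarrassingly parallel, so elapsed time
    # shrinks roughly linearly with the number of contributing cards.
    return card_months / cards

gpu_years = 3700 / 12                    # ~308, i.e. "about 300 GPU-years"
with_100 = wall_clock_months(3700, 100)  # 37 months with 100 cards
with_1000 = wall_clock_months(3700, 1000)  # 3.7 months with 1000 cards
```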
