New design paradigm discussion: Massively Multiprocess

Discussion and support for the CUDA Multiforcers (Windows and Linux)

Re: New design paradigm discussion: Massively Multiprocess

Postby Bitweasil » Tue May 19, 2009 1:07 pm

I think, initially, the focus should be on the simple client/server modular brute forcers.

I see the benefits of a distributed model, but I think we should focus on something more directly useful first. As neinbrucke pointed out, there are a lot of people who would prefer (actually, require) that the hashes they are testing not leave their immediate control. A distributed model would be useless for them. Most security professionals testing a company's passwords can't let those hashes go roaming around the internet.

Once the server and clients are written, it would not be difficult to write a more advanced server that implements a distributed system. The clients can communicate directly with this server, which handles all the distribution.

Also, as long as the API includes enough fields to handle authentication, this can be used for an internet-wide cracking system. Again, either connect the clients directly to the central server or, more likely, write a small server daemon that proxies requests for it.

I think the client-server model gives us the most flexibility without the need to rewrite/recode things. The clients will remain useful, and the servers can be easily modified since they aren't doing any of the actual heavy lifting. The servers could also be written in something better suited to string handling - PHP, Python, etc. would be perfectly suitable languages for servers.
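
As a very rough sketch of how little the server actually has to do - plain Python; the /work and /result paths, the X-Auth-Token header, and the field names are placeholders for illustration, not a proposed API:

Code: Select all
# Very rough sketch of the server side - plain Python, no persistence.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

SHARED_SECRET = "changeme"      # stands in for whatever auth fields we settle on

# one work unit = hash type, target hash, and a slice of the keyspace to search
work_queue = [
    {"id": 1, "algo": "md5", "hash": "5f4dcc3b5aa765d61d8327deb882cf99",
     "charset": "alnum", "start": 0, "end": 10000000},
]
results = []

class Handler(BaseHTTPRequestHandler):
    def _authorized(self):
        return self.headers.get("X-Auth-Token") == SHARED_SECRET

    def do_GET(self):           # client asks for the next work unit
        if not self._authorized():
            self.send_response(403)
            self.end_headers()
            return
        unit = work_queue.pop(0) if (self.path == "/work" and work_queue) else {}
        body = json.dumps(unit).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):          # client submits a found plaintext
        if self._authorized() and self.path == "/result":
            length = int(self.headers.get("Content-Length", 0))
            results.append(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()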

I'll work on getting a developer environment set up with a wiki & ticket/roadmap tracking.
Bitweasil
Site Admin
 
Posts: 912
Joined: Tue Jan 20, 2009 4:26 pm

Re: New design paradigm discussion: Massively Multiprocess

Postby foobar2342 » Tue May 19, 2009 1:25 pm

vampyr wrote:
Which sounds awesome, in theory, but in practice leads to a few concerns:
1: reliability. If say half the nodes are lost, would the network still function? What if a node and its backup are lost? Surely not having the tables for those functions would impair the network's ability to find plaintexts.
2: As you need to send a request to EVERY node this way (if each node contains its own table), traffic is indeed an issue.
3: Storage? I do not know many people willing to spare say up to 100 GB of disk space on their private machines.
4: Checking? Who knows if the table the node has is corrupted. Certainly, as you're not storing the table on a central server, there is essentially no way of doing this. Yes, you could salt with a set of random hashes, but rainbow tables aren't guaranteed to contain those.


1. if half of your nodes die, then you find a plaintext with lower probability. so you have to make more tables to get some redundancy.
2. don't you have to send each hash to each node anyway? how can a bruteforcer crack a hash without knowing which hash to crack?
3. 100 GB of disk space costs $10. i don't think you will be generating 100 GB though; that takes a lot of time.
4. you don't even have to care about such a malicious client, as you can simply check the answer with a single hash function invocation. the only kind of DoS that can be done is taking requests and never sending results, or sending obviously wrong results. of course you cannot depend on someone *claiming* to have a table; you can only verify that after getting a number of results.
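
to make point 4 concrete, this is all the checking that is needed per submitted result (md5 is just an example algorithm here):

Code: Select all
# one hash invocation per submitted result - no trust in the client required
import hashlib

def result_is_valid(target_hash_hex, claimed_plaintext):
    return hashlib.md5(claimed_plaintext.encode()).hexdigest() == target_hash_hex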
foobar2342
 
Posts: 17
Joined: Sun Apr 05, 2009 7:41 pm

Re: New design paradigm discussion: Massively Multiprocess

Postby Sc00bz » Tue May 19, 2009 1:26 pm

vampyr wrote:1: reliability. If say half the nodes are lost, would the network still function? What if a node and its backup are lost? Surely not having the tables for those functions would impair the network's ability to find plaintexts.
2: As you need to send a request to EVERY node this way (if each node contains its own table), traffic is indeed an issue.
3: Storage? I do not know many people willing to spare say up to 100 GB of disk space on their private machines.
4: Checking? Who knows if the table the node has is corrupted. Certainly, as you're not storing the table on a central server, there is essentially no way of doing this. Yes, you could salt with a set of random hashes, but rainbow tables aren't guaranteed to contain those.

1. Seeded with a central server or trusted super nodes.
2. You only need to send the end points that need to be looked up to the peers whose bucket they fall into. Also, redundancy would be good for error checking.
3. 100 GB * 300 users / 3x redundancy = 10 TB. That's a lot of rainbow tables.
4. Use hashes just like torrents. "Yes you could salt with a set of random hashes, but rainbow tables aren't guaranteed to contain those." Huh?

The only problem is that, during generation, if a bucket gets lost or only one person has it (and that person is possibly compromised), it will affect the end result. Or you just add trusted super nodes or a centralized server to receive copies of the generated tables. Once generation is done, you generate hashes for each bucket, and if there's a corrupted bucket the node grabs it from the server or super nodes.
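
A rough sketch of the bucket routing from point 2, in Python - the bucket count, peer names, and 3-way assignment are all made up for illustration:

Code: Select all
# Each end point value maps to one bucket; only the peers holding that bucket
# get asked about it, instead of broadcasting every lookup to every node.
NUM_BUCKETS = 256

def bucket_for_endpoint(endpoint_value):
    # end points are roughly uniform, so a simple modulo split is fine here
    return endpoint_value % NUM_BUCKETS

# made-up assignment of buckets to peers, with 3-way redundancy
bucket_owners = {b: ["peer%d" % ((b + i) % 300) for i in range(3)]
                 for b in range(NUM_BUCKETS)}

def peers_to_query(endpoint_value):
    return bucket_owners[bucket_for_endpoint(endpoint_value)]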

haha boo you beat me
Sc00bz
 
Posts: 93
Joined: Thu Jan 22, 2009 9:31 pm

Re: New design paradigm discussion: Massively Multiprocess

Postby Sc00bz » Tue May 19, 2009 1:30 pm

foobar2342 wrote:2. don't you have to send each hash to each node anyway? how can a bruteforcer crack a hash without knowing which hash to crack?

Then each node would need to generate all the end points to look up on their own, which will be expensive.
Sc00bz
 
Posts: 93
Joined: Thu Jan 22, 2009 9:31 pm

Re: New design paradigm discussion: Massively Multiprocess

Postby foobar2342 » Tue May 19, 2009 1:34 pm

Bitweasil wrote:I think, initially, the focus should be on the simple client/server modular brute forcers.

I see the benefits of a distributed model, but I think we should focus on something more directly useful first. As neinbrucke pointed out, there are a lot of people who would prefer (actually, require) that the hashes they are testing not leave their immediate control. A distributed model would be useless for them. Most security professionals testing a company's passwords can't let those hashes go roaming around the internet.

Once the server and clients are written, it would not be difficult to write a more advanced server that implements a distributed system. The clients can communicate directly with this server, which handles all the distribution.

Also, as long as the API includes enough fields to handle authentication, this can be used for an internet-wide cracking system. Again, either connect the clients directly to the central server or, more likely, write a small server daemon that proxies requests for it.

I think the client-server model gives us the most flexibility without the need to rewrite/recode things. The clients will remain useful, and the servers can be easily modified since they aren't doing any of the actual heavy lifting. The servers could also be written in something better suited to string handling - PHP, Python, etc. would be perfectly suitable languages for servers.

I'll work on getting a developer environment set up with a wiki & ticket/roadmap tracking.


if someone audits a corporate network and finds a vulnerable password, that password gets changed anyway. only the super-paranoid admin fears the time gap between the internet knowing that *somewhere* there is a password x and that password being changed.

bruteforcing is still a waste of resources: once you have searched the keyspace, you have to start all over again. after 2 runs over the whole keyspace you have done the same amount of work as somebody building tables, except that you do not have any tables afterwards. also, you can check hashes against the values you get during the table generation process, so you can start right away.
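
rough numbers behind that claim - the chain length and the 2x coverage factor are just example values i picked, the point is only the ratio of the work:

Code: Select all
# back-of-the-envelope: table generation work vs. repeated brute-force passes
keyspace = 36 ** 8                    # ~2.8e12 for length-8 lowercase alphanumeric
chain_len = 500000
chains = 2 * keyspace // chain_len    # ~2x the keyspace for decent coverage
table_gen_work = chains * chain_len   # hash+reduce operations to build the tables
bruteforce_pass = keyspace            # hash operations for one full pass
print(table_gen_work / bruteforce_pass)   # ~2.0 -> two passes = the table work, minus the tables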
foobar2342
 
Posts: 17
Joined: Sun Apr 05, 2009 7:41 pm

Re: New design paradigm discussion: Massively Multiprocess

Postby foobar2342 » Tue May 19, 2009 1:37 pm

Sc00bz wrote:
foobar2342 wrote:2. don't you have to send each hash to each node anyway? how can a bruteforcer crack a hash without knowing which hash to crack?

Then each node would need to generate all the end points to look up on their own, which will be expensive.


i don't understand what you mean. rainbow table end points?
foobar2342
 
Posts: 17
Joined: Sun Apr 05, 2009 7:41 pm

Re: New design paradigm discussion: Massively Multiprocess

Postby Sc00bz » Tue May 19, 2009 1:48 pm

Rainbow table end points.

On a side note, I think you're talking about non-perfect rainbow tables, which are bigger and slower but take less time to generate.

There is one thing: if you have, say, 10,000 hashes to crack, you can do that faster with brute forcing than with rainbow tables. The cost of adding one more hash when brute forcing is very small, but when adding one more hash for rainbow tables you still need to generate all the end points to look up, which is separate work from all the previous hashes.
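
To show what I mean, here's a rough sketch of the lookup work for a single hash - the hash and reduction functions are just stand-ins, not the real ones:

Code: Select all
# For ONE target hash you have to walk the chain tail from every possible
# column, roughly chain_len^2/2 hash+reduce operations per table - and none
# of that work is shared with the next hash you want to look up.
import hashlib

def H(x):                        # stand-in for the real hash function
    return hashlib.md5(x).digest()

def R(digest, column):           # stand-in reduction function, varies per column
    return digest[:6] + column.to_bytes(4, "big")

def endpoints_for(target_digest, chain_len):
    ends = []
    for start_col in range(chain_len):
        x = R(target_digest, start_col)
        for col in range(start_col + 1, chain_len):
            x = R(H(x), col)
        ends.append(x)           # each candidate end point gets looked up in the table
    return ends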
Sc00bz
 
Posts: 93
Joined: Thu Jan 22, 2009 9:31 pm

Re: New design paradigm discussion: Massively Multiprocess

Postby Bitweasil » Tue May 19, 2009 1:54 pm

foobar2342 wrote:if someone audits a corporate network and finds a vulnerable password, that password gets changed anyway. only the super-paranoid admin fears the time gap between the internet knowing that *somewhere* there is a password x and that password being changed.

bruteforcing is still a waste of resources: once you have searched the keyspace, you have to start all over again. after 2 runs over the whole keyspace you have done the same amount of work as somebody building tables, except that you do not have any tables afterwards. also, you can check hashes against the values you get during the table generation process, so you can start right away.


I've yet to talk to a security professional who is interested in submitting company hashes to the internet. Your point is moderately valid, but they still don't want to do it. Generally, a pentester only runs shorter attacks, not long-running ones - something not crackable in 3 days is decently secure, but may fall to a week-plus attack.

Also, brute forcing is highly effective on large hash lists, as there is a VERY high fixed time cost per hash for rainbow tables - for each hash, you need to generate all the candidate end points, which is time consuming: on the order of 45 minutes per hash per table for len500k tables, with GPU acceleration. It's worth it for long passwords, but for most shorter stuff, brute forcing large lists of unsalted hashes is significantly faster.
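
A toy illustration of the unsalted-list point - plain Python and MD5 just for the sketch, not how the Multiforcer itself is implemented:

Code: Select all
# Every candidate is hashed once and checked against ALL targets via a set,
# so a list of 10,000 unsalted hashes costs barely more than a single one.
import hashlib
from itertools import product

def bruteforce(target_hashes, charset="abcdefghijklmnopqrstuvwxyz0123456789", max_len=4):
    targets = set(target_hashes)
    found = {}
    for length in range(1, max_len + 1):
        for combo in product(charset, repeat=length):
            candidate = "".join(combo)
            digest = hashlib.md5(candidate.encode()).hexdigest()
            if digest in targets:          # O(1) set membership test per candidate
                found[digest] = candidate
    return found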
Bitweasil
Site Admin
 
Posts: 912
Joined: Tue Jan 20, 2009 4:26 pm

Re: New design paradigm discussion: Massively Multiprocess

Postby foobar2342 » Tue May 19, 2009 2:21 pm

Sc00bz wrote:Rainbow table end points.

On a side note, I think you're talking about non-perfect rainbow tables, which are bigger and slower but take less time to generate.

There is one thing: if you have, say, 10,000 hashes to crack, you can do that faster with brute forcing than with rainbow tables. The cost of adding one more hash when brute forcing is very small, but when adding one more hash for rainbow tables you still need to generate all the end points to look up, which is separate work from all the previous hashes.


each client would build a perfect rainbow table (with perfect meaning there are no chain merges).

regarding your second point, i agree that there is a certain number of hashes above which bruteforcers become more efficient. what would that number actually be? and what is the size of the keyspace we are talking about?
foobar2342
 
Posts: 17
Joined: Sun Apr 05, 2009 7:41 pm

Re: New design paradigm discussion: Massively Multiprocess

Postby vampyr » Tue May 19, 2009 3:04 pm

Given that i can search the 1-8 alphanumeric keyspace in a few hours on half of my computing power (2*4870x2), i'd say bruteforce 1-7 characters (or 1-8 for large amounts of hashes of the same type) and use rainbow tables for 7-9 characters.
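
for reference, the rough arithmetic behind that - assuming a 36-character lowercase alphanumeric set and calling "a few hours" five:

Code: Select all
# size of the 1-8 character keyspace and the hash rate needed to cover it
keyspace = sum(36 ** n for n in range(1, 9))   # ~2.9e12 candidates
hours = 5                                      # arbitrary stand-in for "a few hours"
print(keyspace)
print(keyspace / (hours * 3600))               # ~160 million hashes/sec required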
vampyr
 
Posts: 9
Joined: Mon May 18, 2009 11:23 am
