Topic: Non-bitcoin cryptography question  (Read 2327 times)
DannyHamilton (OP)
Legendary
February 08, 2016, 10:22:30 PM (last edit: February 08, 2016, 10:42:38 PM)
#1

Not sure if this is the right sub-forum for this thread, but I couldn't find a better place, and when I asked in Meta, this is where I was directed.

Let's say I want to receive a large collection of data from a data provider on a regular basis (perhaps daily).

We've both generated asymmetric keys and are certain that we have each other's public key (exchanged it in person).

They will be encrypting the data with my public key so that the only two entities that have control over who sees that data are themselves and me.

They will also be signing a SHA256 hash of the data with their private key each time they provide it so that I can be certain that the data I receive is actually from them and not an impostor.

So far, so good.

Now, let's say I'm authorized to decrypt the data, break it up into smaller chunks (according to a set of rules), sign each chunk with my own private key, and re-encrypt each of these smaller chunks of data with the public keys of the people with whom I'll be sharing these subsets of data.

If I've got this right, I can verify that the data I received was from the sender, and the final recipients can all verify that the data they receive is actually from me.  Each recipient of the data I send can only see the data that has been encrypted with their public key and can't see any of the data that I provide to anyone else (unless that other entity or the originating entity chooses to share it).
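In compact notation, with E_pk for public-key encryption, Sig_sk for signing, X for the full data set, and Y_1, ..., Y_n for the subsets I create from it (the Y_i together covering X):

$$\text{provider} \rightarrow \text{me}: \quad E_{pk_{\text{me}}}(X),\ \mathrm{Sig}_{sk_{\text{prov}}}\big(\mathrm{SHA256}(X)\big)$$
$$\text{me} \rightarrow \text{recipient}_i: \quad E_{pk_i}(Y_i),\ \mathrm{Sig}_{sk_{\text{me}}}\big(\mathrm{SHA256}(Y_i)\big)$$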

Great, now, here's the problem I'm trying to solve and I'm wondering if there are any cryptographic techniques (tricks?) that can help.

Let's say the data provider sends me some sort of bad data.  I never look at the contents of the data and would have no way of knowing whether it's accurate even if I did.  I simply pass the bad data on in the subsets that I send out.  The recipient of a bad subset incurs a significant loss due to decisions made based on this bad data. They contact (sue?) the data provider to attempt to get compensated for their loss.

Under the process that I've described so far, if I don't store all the data that the data provider ever sends me in perpetuity, the provider could theoretically claim that they gave me the correct data and that some bug in my process must have introduced the bad data.  If they falsify their source data, they might even succeed in hiding the fact that they were the source of the bad data.  Unless I can prove that the data I sent was actually the data that I received, I could end up looking like the party that messed up.

tl;dr
So, if the provider doesn't know how the data will be broken up into subsets, is there any way I can store proof that the subset data came from the source data without needing to store all the source data itself?

Knowing too little about cryptography to know if the following example even makes sense, my thought was that maybe there'd be some other hash instead of SHA256 that I could use. Something where I could ask the provider to sign an XYZ hash value of the full set before sending it to me, and then, as long as I store an XYZ hash of each of the subsets that I create, it could be proven that my XYZ hash was based on data that was also included in the provider's XYZ hash.

If the provider knew how the subsets were going to be created, then I think a Merkle tree of SHA256 hashes would work: the Merkle root would represent all the data, with a separate SHA256 for each subset. However, I don't have the ability to go back and request the signature AFTER I've broken up the data, and the provider won't know the rules I'm using to break it up.

So, I think I'm looking for something similar to a Merkle tree where the creator of the "root" doesn't know exactly how the branches will be broken up, although I'm open to any other concepts.
xqus
Full Member
February 09, 2016, 08:30:47 AM
#2

I think it would be difficult, as long as they sign the whole set of data with one signature and you break it up into parts before you sign it again.

Is it possible for you to collect the data back from the recipients and assemble it again, so that you can prove it was bad when you received it?

If not, I think you will need to store all the data and prove it that way.

As far as I know there is no way to prove that, for example, 4 hashes must have originated from signature X or hash Y.

Maybe you could create an HMAC of the original data, keyed with the hash of the chunked data, and then include that hash, along with some metadata, and send that (signed) to both parties.
Then, to prove it was not altered, you would receive the original data, chunk it like you did, and prove that the signature matches and that the chunked data was not altered.

CIYAM
Legendary
February 09, 2016, 08:41:08 AM (last edit: February 09, 2016, 08:55:10 AM)
#3

Let me see if I've understood the problem correctly:

Alice provides X amount of data which is encrypted and sent to
Brian who then takes a decrypted portion Y of this data and encrypts that and sends it to
Cathy who wants to know whether or not Alice really did send this data Y to Brian in the first place?

Assuming that I have understood this, the approach I would take would involve Alice and Cathy communicating (if you do not wish that to occur then I think it is going to be very tricky, especially if the data is being broken up arbitrarily).

If Brian gave you some metadata (such as the offset in X that Y comes from), then without revealing the data Cathy could send this metadata to Alice (along with the length of Y), who could then send back a hash of Y which Cathy verifies.

piotr_n
Legendary
February 09, 2016, 09:00:50 AM
#4

tl;dr
So, if the provider doesn't know how the data will be broken up into subsets, is there any way I can store proof that the subset data came from the source data without needing to store all the source data itself?

Make the provider sign each N-KB block of the data and publish these signatures.
Then split the data using (a multiple of) the N-KB size.

DannyHamilton (OP)
Legendary
February 09, 2016, 01:57:20 PM
#5

I thought that I explained the issue pretty well in my OP, but I'm getting answers that don't meet the requirements.  Perhaps I didn't do as good a job explaining as I thought I did.

I really don't think there is a solution where I don't need to store the original data, but I just wanted to make sure.

Here's why the 3 most recent suggestions don't meet my needs:

- snip -
Then, to prove it was not altered, you would receive the original data, chunk it like you did, and prove that the signature matches
- snip -

Yes, I'm already aware that I can accomplish what I'm trying to do if I have access to all the original data.  That's going to be a huge amount of data. If I need to store the original data forever, that's going to be a bit of a hassle.  I'm trying to figure out if there's a way to store less.

I *think* that if the data provider knew ahead of time (or if I could communicate back to them) how the subsets would be created, then I could just have them generate a Merkle tree and sign the root. However, at the moment I'm unlikely to be able to have them generate that for me.

Alice provides X amount of data which is encrypted and sent to
Brian who then takes a decrypted portion Y of this data and encrypts that and sends it to
Cathy who wants to know whether or not Alice really did send this data Y to Brian in the first place?

That sounds about right.  In this case, I'm Brian.  I want to be able to prove to Cathy, Alice, and anyone else that the data I gave Cathy was received in a larger set from Alice.

Assuming that I have understood this, the approach I would take would involve Alice and Cathy communicating (if you do not wish that to occur then I think it is going to be very tricky, especially if the data is being broken up arbitrarily).

The data is being broken into subsets according to a set of rules.  The rules are different for each type of data received, but they are always consistent for that type of data. So it isn't entirely arbitrary, but Alice doesn't know or care what the rules are for creating the subsets. She won't be breaking it up for me, that's what I'm supposed to be doing.


If Brian gave you

I am Brian in this example, so he doesn't need to "give me" anything.

some metadata (such as the offset in X that Y comes from), then without revealing the data Cathy could send this metadata to Alice (along with the length of Y), who could then send back a hash of Y which Cathy verifies.

I'm trying to protect Brian against Alice lying.  In your example Alice could fix the data on her end and then send a hash of the fixed data. Cathy would think that Brian altered the data, since the data she received doesn't match the data that Alice claims she sent to Brian.  Brian needs to be able to prove that Alice changed the data.

Make the provider sign each N-KB block of the data and publish these signatures.
Then split the data using (a multiple of) the N-KB size.

There is a set of rules that need to be followed when providing the subsets. I can't count on those rules always resulting in alignment with N-KB blocks. The data provider doesn't know or care what the rules are for creating the subsets. They won't be breaking it up for me, that's what I'm supposed to be doing.
CIYAM
Legendary
February 09, 2016, 02:10:13 PM
#6

I'm trying to protect Brian against Alice lying.  In your example Alice could fix the data on her end and then send a hash of the fixed data. Cathy would think that Brian altered the data, since the data she received doesn't match the data that Alice claims she sent to Brian.  Brian needs to be able to prove that Alice changed the data.

Hmm... what I think you'd need to have happen is that Alice would need to be given the information about how the data was divided up and then sign a Merkle tree of that information (which basically means that Alice is also dividing the data up the same way to check the Merkle tree; however, assuming the work that determined how to divide the data up was much harder than processing a list of offsets, then perhaps that isn't too much of a problem, as Alice has all the data in the first place).

If you don't want Alice to keep the data after broadcasting it to Brian then I think you're probably SOL but if Alice can keep the information just long enough to sign off on the Merkle tree then you should be okay.

piotr_n
Legendary
February 09, 2016, 02:33:35 PM
#7

Make the provider sign each N-KB block of the data and publish these signatures.
Then split the data using (a multiple of) the N-KB size.

There is a set of rules that need to be followed when providing the subsets. I can't count on those rules always resulting in alignment with N-KB blocks. The data provider doesn't know or care what the rules are for creating the subsets. They won't be breaking it up for me, that's what I'm supposed to be doing.

OK.. so you need to
1) get a big file created by the source party and signed with the source's private key.
2) cut it into pieces, signing each of the pieces with your private key, then distribute these signatures, as well as the content of the pieces.
3) figure out how a third party that receives any of these pieces can make sure that it represents an actual piece of the original file, while not trusting you nor having access to the rest of the pieces

and all that without any cooperation from the source party...?

well, good luck with that!

IMHO, it can't be done.

DannyHamilton (OP)
Legendary
February 09, 2016, 05:10:14 PM (last edit: February 09, 2016, 05:22:52 PM)
#8

- snip -
If you don't want Alice to keep the data after broadcasting it to Brian then I think you're probably SOL
- snip -

- snip -
well, good luck with that!

IMHO, it can't be done.

That's what I figured.  I personally couldn't think of a way that it could be done, but thought it was worth asking in case there was something I wasn't aware of.

My only hope was that there was some sort of function I was unaware of, such that:

Given:
DataSet = DataSet_A + DataSet_B + DataSet_C + DataSet_D

We could say that:
Func(DataSet) = Func(DataSet_A) + Func(DataSet_B) + Func(DataSet_C) + Func(DataSet_D)

Where:
The size of Func(X) is significantly less than the size of X
Func(X) results in the same output every time you use the same input
When X and Y are not identical, there is an extremely low chance that Func(X) would ever have the same result as Func(Y)

Notice that the "Where" section sounds exactly like a cryptographically strong hash, but I'd need a hash function that also had the attribute listed under "We could say that".

I can think of a very weak "hash" that meets the needs of the first part, but it isn't strong enough for the second part:
  • Convert the data to a binary representation
  • Add up all the 1's in the dataset
  • Sign the resulting value

Now if you do the same for the subsets, you'll find that I only need to store the quantity of 1's in each subset. I can now show that the sum of the signed quantities of 1's in the subsets is equal to the quantity of 1's that the data provider signed.

Unfortunately, this weak example is useless because it's too easy to move 1's around in the data instead of adding or removing them. Which means it fails the "When X and Y are not identical, there is an extremely low chance that Func(X) would ever have the same result as Func(Y)" test.
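For illustration, here is that weak popcount checksum as a self-contained C sketch (hypothetical throwaway code, just to demonstrate the splitting property):

Code:
#include <stdio.h>
#include <stdint.h>
#include <string.h>

// the weak additive "hash": count of 1-bits in a buffer
uint64_t ones_count(const uint8_t *data,size_t len)
{
    uint64_t total = 0; size_t i; uint8_t b;
    for (i=0; i<len; i++)
        for (b=data[i]; b!=0; b&=(uint8_t)(b - 1)) // clear the lowest set bit
            total++;
    return(total);
}

int main(void)
{
    const char *dataset = "some dataset that will be split into two subsets";
    size_t len = strlen(dataset),half = len / 2;
    uint64_t whole = ones_count((const uint8_t *)dataset,len);
    uint64_t a = ones_count((const uint8_t *)dataset,half);
    uint64_t b = ones_count((const uint8_t *)dataset + half,len - half);
    // Func(DataSet) == Func(DataSet_A) + Func(DataSet_B)
    printf("whole=%llu a+b=%llu\n",(unsigned long long)whole,(unsigned long long)(a + b));
    return(0);
}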

I need something better, but I don't know if something better exists.  I've just got this nagging feeling in the back of my head that I saw something like what I want back in the early 90's. I'm probably wrong and thinking of something else, but I had to ask just in case.
spartacusrex
Hero Member
February 10, 2016, 11:34:47 AM
#9

What about some kind of homomorphic hash function?

Hash(X+Y) = Hash(X) + Hash(Y). (Doesn't Paillier do something like this? Or whatever hash function gmax uses in his CT.)

Then Alice signs the Hash(X+Y+...) and you can show that the sum of all your little hashed pieces adds up to the hash of the big piece, which is signed by Alice.
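(For reference, Paillier is an additively homomorphic encryption scheme rather than a hash. With public modulus n, random r, and $E(m) = g^m r^n \bmod n^2$, its defining property is

$$E(m_1) \cdot E(m_2) \equiv E(m_1 + m_2 \bmod n) \pmod{n^2}$$

so the product of two ciphertexts decrypts to the sum of the plaintexts.)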

DannyHamilton (OP)
Legendary
February 10, 2016, 02:07:58 PM
#10

What about some kind of homomorphic hash function?

Hash(X+Y) = Hash(X) + Hash(Y). (Doesn't Paillier do something like this? Or whatever hash function gmax uses in his CT.)

Then Alice signs the Hash(X+Y+...) and you can show that the sum of all your little hashed pieces adds up to the hash of the big piece, which is signed by Alice.

This is the reason I started this thread.

I've not heard of Paillier, but I'm going to go Google it right now.

I believe that what I'm looking for is exactly what you're describing where:
Hash(X+Y+...) = Hash(X) + Hash(Y) + Hash(...)

I didn't know if such a hash existed, but if it does I think that would be exactly what I need.

By the way, when you say gmax, do you mean gmaxwell?  And what is a CT?
(Honestly, I was really hoping that gmaxwell would see and take an interest in this thread. I suspect that if there is a solution to my problem, he knows exactly what it is.)
spartacusrex
Hero Member
February 10, 2016, 04:29:22 PM (last edit: February 10, 2016, 04:57:27 PM)
#11

By the way, when you say gmax, do you mean gmaxwell?  And what is a CT?
(Honestly, I was really hoping that gmaxwell would see and take an interest in this thread. I suspect that if there is a solution to my problem, he knows exactly what it is.)

Yep.

CT is Confidential Transactions. It's his system for showing that the sum of the inputs of a txn adds up to the sum of the outputs, without revealing any of the values. He's implemented it in his Elements sidechain.

https://bitcointalk.org/index.php?topic=1085273.0

And the bit that I thought might be helpful to you is:

Quote
..
A Pedersen commitment works like the above but with an additional property: commitments can be added, and the sum of a set of commitments is the same as a commitment to the sum of the data (with a blinding key set as the sum of the blinding keys):

  C(BF1, data1) + C(BF2, data2) == C(BF1 + BF2, data1 + data2)
  C(BF1, data1) - C(BF1, data1) == 0

In other words, the commitment preserves addition and the commutative property applies.

If data_n = {1,1,2} and BF_n = {5,10,15} then:

  C(BF1, data1) + C(BF2, data2) - C(BF3, data3) == 0

and so on.

..
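In the elliptic-curve form, with independent generators G and H and blinding factor r, a Pedersen commitment is $C(r, m) = rG + mH$, so the additive property quoted above falls out directly:

$$C(r_1, m_1) + C(r_2, m_2) = (r_1 + r_2)G + (m_1 + m_2)H = C(r_1 + r_2,\ m_1 + m_2)$$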

Yes he would. He's bad ass.

OnkelPaul
Legendary
February 11, 2016, 09:28:06 PM
#12

What about the following approach:
Assume that you split the data into relatively few chunks without reordering parts, and that the original source of the data computed and signed a SHA-256 hash over the whole data.
Then you could run SHA-256 over the whole dataset, recording the intermediate state of the hashing algorithm wherever you split the data.
SHA-256 runs on 512-bit blocks, so if you split at a block boundary, you only need to note the values in the hashing registers at that point.
If you split the data within a block, you need to note the hashing register values before the block, plus the 512 bit block itself.
Each data recipient receives the states before and after their block in addition to the block data, so they can check whether their piece is OK. They need to provide you with a signed statement about this fact.
Now you only have to archive these signed statements and the block boundary states of the hashing engine. Since the last boundary state is equal to the hash signed by the original author, these pieces together can be used to prove that you did not tamper with the data.
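A minimal sketch of the midstate recording, assuming all splits fall on 64-byte boundaries and reusing the sha256_vstate/sha256_vinit/sha256_vcompress routines jl777 posts later in this thread (record_midstates and its parameters are illustrative, not a finished implementation):

Code:
// walk the data one 512-bit block at a time, saving the 8-word SHA-256
// midstate wherever a split boundary is reached
void record_midstates(const uint8_t *data,uint64_t len,const uint64_t *splits,int32_t nsplits,uint32_t midstates[][8])
{
    struct sha256_vstate md; uint64_t off; int32_t i = 0;
    sha256_vinit(&md);
    for (off=0; off+64<=len; off+=64)
    {
        sha256_vcompress(&md,(uint8_t *)data + off); // one 512-bit block
        if ( i < nsplits && off + 64 == splits[i] ) // split boundary reached
            memcpy(midstates[i++],md.state,sizeof(md.state)); // 32-byte midstate
    }
    // the recipient of chunk k resumes from midstates[k-1], compresses their
    // own chunk, and checks the result equals midstates[k]; the final midstate
    // plus the standard length padding yields the hash the original source signed
}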

I see two possible issues with this approach, though:
1. You would disclose a little more of the data to each recipient since you have to cut at 512-bit boundaries. I don't know whether that's a problem, or whether you can nudge your data source to deliver the data in 512-bit chunks, for example by padding each record to a multiple of 64 bytes.
2. Data recipients could claim that you colluded with the recipient of the last block so that he signs an untrue statement about the final hash value. Since the data integrity is only proven when the final hash of the last block is equal to the hash signed by the original data source, this might be a problem. It is also possible to collude with a recipient that got an earlier block, but you could then cheat only on recipients before the one you colluded with.

I don't know whether there's an easy way of fixing the last issue, and maybe there are more issues than I see at the moment :-(

Onkel Paul

spartacusrex
Hero Member
February 12, 2016, 01:53:51 PM
#13

Onkel - I like.

2. Data recipients could claim that you colluded with the recipient of the last block so that he signs an untrue statement about the final hash value. Since the data integrity is only proven when the final hash of the last block is equal to the hash signed by the original data source..

How about if he kept the complete last block of data, as well as the last signed statement, so it would not be possible for him to cheat? He can show that it does actually hash to the final hash, which is signed by Alice, and he can't change that block of data without performing a successful pre-image attack.

Actually, I'm not sure how much of the final block of data he would need to keep. Maybe just the final 64 bytes, since that alone would also require a pre-image attack to break?

jl777
Legendary
February 17, 2016, 07:34:56 AM
#14

I think the hashes need to have some sort of algebraic structure and not just be random, so Pedersen commitments seem to be part of the solution.

The part that I am unclear on is that if you are not storing the data, but at some point need to provide all the data, those two requirements seem mutually exclusive.

But if you just need to be able to prove you didn't change the data, then showing that your hashes add up to zero and match the combined hash from the data source seems possible.

The data source would need to sign the combined hash.

http://crypto.stackexchange.com/questions/11923/difference-between-pedersen-commitment-and-commitment-based-on-elgamal?rq=1

Also, it seems unrelated, but the maths behind the Dining Cryptographers might be helpful.

James

jl777
Legendary
February 17, 2016, 08:36:41 PM
#15

I think the following is a relatively simple way to achieve your goals (or at least pretty close to it). It requires the data provider to know how you are splitting up the data, at least at some level of granularity. Each of these blocks of data can't be changed, but you could mix and match any combination of the granular blocks.

So if you have a terabyte of total data and, say, 1MB is the smallest block, you would need 1 million hashes, but with a roughly 30,000:1 reduction in size, maybe that is acceptable. It is a tradeoff decision that doesn't affect the calculation. The larger the smallest block (let's call it an sblock), the less flexibility in distributing it; the smaller the sblock, the more flexibility, but also the more elements that need to be stored.

The data creator will need to know the size of the sblock, apply the same calculations as you do, and then sign the result so you have proof that is what was delivered.

For each sblock worth of data (we assume the last block is zero-filled), take the SHA256 hash and then treat that as a curve25519 field element. Five bits need to be hard-coded, but that still leaves 251 bits, and the odds of collision are extremely small.

The following code assumes a 64-bit CPU and should be close to a working sblocks calculation:

Code:
/******************************************************************************
 * Copyright © 2014-2016 The SuperNET Developers.                             *
 *                                                                            *
 * See the AUTHORS, DEVELOPER-AGREEMENT and LICENSE files at                  *
 * the top-level directory of this distribution for the individual copyright  *
 * holder information and the developer policies on copyright and licensing.  *
 *                                                                            *
 * Unless otherwise agreed in a custom licensing agreement, no part of the    *
 * SuperNET software, including this file may be copied, modified, propagated *
 * or distributed except according to the terms contained in the LICENSE file *
 *                                                                            *
 * Removal or modification of this copyright notice is prohibited.            *
 *                                                                            *
 ******************************************************************************/

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <memory.h>

struct sha256_vstate { uint64_t length; uint32_t state[8],curlen; uint8_t buf[64]; };

union _bits256 { uint8_t bytes[32]; uint16_t ushorts[16]; uint32_t uints[8]; uint64_t ulongs[4]; uint64_t txid; };
typedef union _bits256 bits256;

union _bits320 { uint8_t bytes[40]; uint16_t ushorts[20]; uint32_t uints[10]; uint64_t ulongs[5]; uint64_t txid; };
typedef union _bits320 bits320;

// special gcc mode for 128-bit integers
typedef unsigned uint128_t __attribute__((mode(TI)));

// sha256 is ported from libtom, the curve25519 is refactored donna_curve25519.c

#define STORE32L(x, y)                                                                     \
{ (y)[3] = (uint8_t)(((x)>>24)&255); (y)[2] = (uint8_t)(((x)>>16)&255);   \
(y)[1] = (uint8_t)(((x)>>8)&255); (y)[0] = (uint8_t)((x)&255); }

#define LOAD32L(x, y)                            \
{ x = (uint32_t)(((uint64_t)((y)[3] & 255)<<24) | \
((uint32_t)((y)[2] & 255)<<16) | \
((uint32_t)((y)[1] & 255)<<8)  | \
((uint32_t)((y)[0] & 255))); }

#define STORE64L(x, y)                                                                     \
{ (y)[7] = (uint8_t)(((x)>>56)&255); (y)[6] = (uint8_t)(((x)>>48)&255);   \
(y)[5] = (uint8_t)(((x)>>40)&255); (y)[4] = (uint8_t)(((x)>>32)&255);   \
(y)[3] = (uint8_t)(((x)>>24)&255); (y)[2] = (uint8_t)(((x)>>16)&255);   \
(y)[1] = (uint8_t)(((x)>>8)&255); (y)[0] = (uint8_t)((x)&255); }

#define LOAD64L(x, y)                                                       \
{ x = (((uint64_t)((y)[7] & 255))<<56)|(((uint64_t)((y)[6] & 255))<<48)| \
(((uint64_t)((y)[5] & 255))<<40)|(((uint64_t)((y)[4] & 255))<<32)| \
(((uint64_t)((y)[3] & 255))<<24)|(((uint64_t)((y)[2] & 255))<<16)| \
(((uint64_t)((y)[1] & 255))<<8)|(((uint64_t)((y)[0] & 255))); }

#define STORE32H(x, y)                                                                     \
{ (y)[0] = (uint8_t)(((x)>>24)&255); (y)[1] = (uint8_t)(((x)>>16)&255);   \
(y)[2] = (uint8_t)(((x)>>8)&255); (y)[3] = (uint8_t)((x)&255); }

#define LOAD32H(x, y)                            \
{ x = (uint32_t)(((uint64_t)((y)[0] & 255)<<24) | \
((uint32_t)((y)[1] & 255)<<16) | \
((uint32_t)((y)[2] & 255)<<8)  | \
((uint32_t)((y)[3] & 255))); }

#define STORE64H(x, y)                                                                     \
{ (y)[0] = (uint8_t)(((x)>>56)&255); (y)[1] = (uint8_t)(((x)>>48)&255);     \
(y)[2] = (uint8_t)(((x)>>40)&255); (y)[3] = (uint8_t)(((x)>>32)&255);     \
(y)[4] = (uint8_t)(((x)>>24)&255); (y)[5] = (uint8_t)(((x)>>16)&255);     \
(y)[6] = (uint8_t)(((x)>>8)&255); (y)[7] = (uint8_t)((x)&255); }

#define LOAD64H(x, y)                                                      \
{ x = (((uint64_t)((y)[0] & 255))<<56)|(((uint64_t)((y)[1] & 255))<<48) | \
(((uint64_t)((y)[2] & 255))<<40)|(((uint64_t)((y)[3] & 255))<<32) | \
(((uint64_t)((y)[4] & 255))<<24)|(((uint64_t)((y)[5] & 255))<<16) | \
(((uint64_t)((y)[6] & 255))<<8)|(((uint64_t)((y)[7] & 255))); }

// Various logical functions
#define RORc(x, y) ( ((((uint32_t)(x)&0xFFFFFFFFUL)>>(uint32_t)((y)&31)) | ((uint32_t)(x)<<(uint32_t)(32-((y)&31)))) & 0xFFFFFFFFUL)
#define Ch(x,y,z)       (z ^ (x & (y ^ z)))
#define Maj(x,y,z)      (((x | y) & z) | (x & y))
#define S(x, n)         RORc((x),(n))
#define R(x, n)         (((x)&0xFFFFFFFFUL)>>(n))
#define Sigma0(x)       (S(x, 2) ^ S(x, 13) ^ S(x, 22))
#define Sigma1(x)       (S(x, 6) ^ S(x, 11) ^ S(x, 25))
#define Gamma0(x)       (S(x, 7) ^ S(x, 18) ^ R(x, 3))
#define Gamma1(x)       (S(x, 17) ^ S(x, 19) ^ R(x, 10))
#define MIN(x, y) ( ((x)<(y))?(x):(y) )

int32_t sha256_vcompress(struct sha256_vstate * md,uint8_t *buf)
{
    uint32_t S[8],W[64],t0,t1,i;
    for (i=0; i<8; i++) // copy state into S
        S[i] = md->state[i];
    for (i=0; i<16; i++) // copy the state into 512-bits into W[0..15]
        LOAD32H(W[i],buf + (4*i));
    for (i=16; i<64; i++) // fill W[16..63]
        W[i] = Gamma1(W[i - 2]) + W[i - 7] + Gamma0(W[i - 15]) + W[i - 16];
    
#define RND(a,b,c,d,e,f,g,h,i,ki)                    \
t0 = h + Sigma1(e) + Ch(e, f, g) + ki + W[i];   \
t1 = Sigma0(a) + Maj(a, b, c);                  \
d += t0;                                        \
h  = t0 + t1;
    
    RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],0,0x428a2f98);
    RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],1,0x71374491);
    RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],2,0xb5c0fbcf);
    RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],3,0xe9b5dba5);
    RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],4,0x3956c25b);
    RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],5,0x59f111f1);
    RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],6,0x923f82a4);
    RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],7,0xab1c5ed5);
    RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],8,0xd807aa98);
    RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],9,0x12835b01);
    RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],10,0x243185be);
    RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],11,0x550c7dc3);
    RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],12,0x72be5d74);
    RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],13,0x80deb1fe);
    RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],14,0x9bdc06a7);
    RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],15,0xc19bf174);
    RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],16,0xe49b69c1);
    RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],17,0xefbe4786);
    RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],18,0x0fc19dc6);
    RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],19,0x240ca1cc);
    RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],20,0x2de92c6f);
    RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],21,0x4a7484aa);
    RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],22,0x5cb0a9dc);
    RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],23,0x76f988da);
    RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],24,0x983e5152);
    RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],25,0xa831c66d);
    RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],26,0xb00327c8);
    RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],27,0xbf597fc7);
    RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],28,0xc6e00bf3);
    RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],29,0xd5a79147);
    RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],30,0x06ca6351);
    RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],31,0x14292967);
    RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],32,0x27b70a85);
    RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],33,0x2e1b2138);
    RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],34,0x4d2c6dfc);
    RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],35,0x53380d13);
    RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],36,0x650a7354);
    RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],37,0x766a0abb);
    RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],38,0x81c2c92e);
    RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],39,0x92722c85);
    RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],40,0xa2bfe8a1);
    RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],41,0xa81a664b);
    RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],42,0xc24b8b70);
    RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],43,0xc76c51a3);
    RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],44,0xd192e819);
    RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],45,0xd6990624);
    RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],46,0xf40e3585);
    RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],47,0x106aa070);
    RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],48,0x19a4c116);
    RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],49,0x1e376c08);
    RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],50,0x2748774c);
    RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],51,0x34b0bcb5);
    RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],52,0x391c0cb3);
    RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],53,0x4ed8aa4a);
    RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],54,0x5b9cca4f);
    RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],55,0x682e6ff3);
    RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],56,0x748f82ee);
    RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],57,0x78a5636f);
    RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],58,0x84c87814);
    RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],59,0x8cc70208);
    RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],60,0x90befffa);
    RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],61,0xa4506ceb);
    RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],62,0xbef9a3f7);
    RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],63,0xc67178f2);
#undef RND
    for (i=0; i<8; i++) // feedback
        md->state[i] = md->state[i] + S[i];
    return(0);
}

#undef RORc
#undef Ch
#undef Maj
#undef S
#undef R
#undef Sigma0
#undef Sigma1
#undef Gamma0
#undef Gamma1

void sha256_vinit(struct sha256_vstate * md)
{
    md->curlen = 0;
    md->length = 0;
    md->state[0] = 0x6A09E667UL;
    md->state[1] = 0xBB67AE85UL;
    md->state[2] = 0x3C6EF372UL;
    md->state[3] = 0xA54FF53AUL;
    md->state[4] = 0x510E527FUL;
    md->state[5] = 0x9B05688CUL;
    md->state[6] = 0x1F83D9ABUL;
    md->state[7] = 0x5BE0CD19UL;
}

int32_t sha256_vprocess(struct sha256_vstate *md,const uint8_t *in,uint64_t inlen)
{
    uint64_t n; int32_t err;
    if ( md->curlen > sizeof(md->buf) )
        return(-1);
    while ( inlen > 0 )
    {
        if ( md->curlen == 0 && inlen >= 64 )
        {
            if ( (err= sha256_vcompress(md,(uint8_t *)in)) != 0 )
                return(err);
            md->length += 64 * 8, in += 64, inlen -= 64;
        }
        else
        {
            n = MIN(inlen,64 - md->curlen);
            memcpy(md->buf + md->curlen,in,(size_t)n);
            md->curlen += n, in += n, inlen -= n;
            if ( md->curlen == 64 )
            {
                if ( (err= sha256_vcompress(md,md->buf)) != 0 )
                    return(err);
                md->length += 8*64;
                md->curlen = 0;
            }
        }
    }
    return(0);
}

int32_t sha256_vdone(struct sha256_vstate *md,uint8_t *out)
{
    int32_t i;
    if ( md->curlen >= sizeof(md->buf) )
        return(-1);
    md->length += md->curlen * 8; // increase the length of the message
    md->buf[md->curlen++] = (uint8_t)0x80; // append the '1' bit
    // if len > 56 bytes we append zeros then compress.  Then we can fall back to padding zeros and length encoding like normal.
    if ( md->curlen > 56 )
    {
        while ( md->curlen < 64 )
            md->buf[md->curlen++] = (uint8_t)0;
        sha256_vcompress(md,md->buf);
        md->curlen = 0;
    }
    while ( md->curlen < 56 ) // pad upto 56 bytes of zeroes
        md->buf[md->curlen++] = (uint8_t)0;
    STORE64H(md->length,md->buf+56); // store length
    sha256_vcompress(md,md->buf);
    for (i=0; i<8; i++) // copy output
        STORE32H(md->state[i],out+(4*i));
    return(0);
}

void store_limb(uint8_t *out,uint64_t in)
{
    int32_t i;
    for (i=0; i<8; i++,in>>=8)
        out[i] = (in & 0xff);
}

uint64_t load_limb(uint8_t *in)
{
    return
    ((uint64_t)in[0]) |
    (((uint64_t)in[1]) << 8) |
    (((uint64_t)in[2]) << 16) |
    (((uint64_t)in[3]) << 24) |
    (((uint64_t)in[4]) << 32) |
    (((uint64_t)in[5]) << 40) |
    (((uint64_t)in[6]) << 48) |
    (((uint64_t)in[7]) << 56);
}

// Take a little-endian, 32-byte number and expand it into polynomial form
bits320 fexpand(bits256 basepoint)
{
    bits320 out;
    out.ulongs[0] = load_limb(basepoint.bytes) & 0x7ffffffffffffLL;
    out.ulongs[1] = (load_limb(basepoint.bytes+6) >> 3) & 0x7ffffffffffffLL;
    out.ulongs[2] = (load_limb(basepoint.bytes+12) >> 6) & 0x7ffffffffffffLL;
    out.ulongs[3] = (load_limb(basepoint.bytes+19) >> 1) & 0x7ffffffffffffLL;
    out.ulongs[4] = (load_limb(basepoint.bytes+24) >> 12) & 0x7ffffffffffffLL;
    return(out);
}

void fcontract_iter(uint128_t t[5],int32_t flag)
{
    int32_t i; uint64_t mask = 0x7ffffffffffffLL;
    for (i=0; i<4; i++)
        t[i+1] += t[i] >> 51, t[i] &= mask;
    if ( flag != 0 )
        t[0] += 19 * (t[4] >> 51);
    t[4] &= mask; // masked on every pass, as in donna's fcontract
}

// Take a fully reduced polynomial form number and contract it into a little-endian, 32-byte array
bits256 fcontract(const bits320 input)
{
    uint128_t t[5]; int32_t i; bits256 out;
    for (i=0; i<5; i++)
        t[i] = input.ulongs[i];
    fcontract_iter(t,1), fcontract_iter(t,1);
    // donna: now t is between 0 and 2^255-1, properly carried.
    // donna: case 1: between 0 and 2^255-20. case 2: between 2^255-19 and 2^255-1.
    t[0] += 19, fcontract_iter(t,1);
    // now between 19 and 2^255-1 in both cases, and offset by 19.
    t[0] += 0x8000000000000 - 19;
    for (i=1; i<5; i++)
        t[i] += 0x8000000000000 - 1;
    // now between 2^255 and 2^256-20, and offset by 2^255.
    fcontract_iter(t,0);
    store_limb(out.bytes,t[0] | (t[1] << 51));
    store_limb(out.bytes+8,(t[1] >> 13) | (t[2] << 38));
    store_limb(out.bytes+16,(t[2] >> 26) | (t[3] << 25));
    store_limb(out.bytes+24,(t[3] >> 39) | (t[4] << 12));
    return(out);
}

// Multiply two numbers: output = in2 * in
// output must be distinct to both inputs. The inputs are reduced coefficient form, the output is not.
// Assumes that in[i] < 2**55 and likewise for in2. On return, output[i] < 2**52
bits320 fmul(const bits320 in2,const bits320 in)
{
    uint128_t t[5]; uint64_t r0,r1,r2,r3,r4,s0,s1,s2,s3,s4,c; bits320 out;
    r0 = in.ulongs[0], r1 = in.ulongs[1], r2 = in.ulongs[2], r3 = in.ulongs[3], r4 = in.ulongs[4];
    s0 = in2.ulongs[0], s1 = in2.ulongs[1], s2 = in2.ulongs[2], s3 = in2.ulongs[3], s4 = in2.ulongs[4];
    t[0]  =  ((uint128_t) r0) * s0;
    t[1]  =  ((uint128_t) r0) * s1 + ((uint128_t) r1) * s0;
    t[2]  =  ((uint128_t) r0) * s2 + ((uint128_t) r2) * s0 + ((uint128_t) r1) * s1;
    t[3]  =  ((uint128_t) r0) * s3 + ((uint128_t) r3) * s0 + ((uint128_t) r1) * s2 + ((uint128_t) r2) * s1;
    t[4]  =  ((uint128_t) r0) * s4 + ((uint128_t) r4) * s0 + ((uint128_t) r3) * s1 + ((uint128_t) r1) * s3 + ((uint128_t) r2) * s2;
    r4 *= 19, r1 *= 19, r2 *= 19, r3 *= 19;
    t[0] += ((uint128_t) r4) * s1 + ((uint128_t) r1) * s4 + ((uint128_t) r2) * s3 + ((uint128_t) r3) * s2;
    t[1] += ((uint128_t) r4) * s2 + ((uint128_t) r2) * s4 + ((uint128_t) r3) * s3;
    t[2] += ((uint128_t) r4) * s3 + ((uint128_t) r3) * s4;
    t[3] += ((uint128_t) r4) * s4;
    r0 = (uint64_t)t[0] & 0x7ffffffffffffLL; c = (uint64_t)(t[0] >> 51);
    t[1] += c;      r1 = (uint64_t)t[1] & 0x7ffffffffffffLL; c = (uint64_t)(t[1] >> 51);
    t[2] += c;      r2 = (uint64_t)t[2] & 0x7ffffffffffffLL; c = (uint64_t)(t[2] >> 51);
    t[3] += c;      r3 = (uint64_t)t[3] & 0x7ffffffffffffLL; c = (uint64_t)(t[3] >> 51);
    t[4] += c;      r4 = (uint64_t)t[4] & 0x7ffffffffffffLL; c = (uint64_t)(t[4] >> 51);
    r0 +=   c * 19; c = r0 >> 51; r0 = r0 & 0x7ffffffffffffLL;
    r1 +=   c;      c = r1 >> 51; r1 = r1 & 0x7ffffffffffffLL;
    r2 +=   c;
    out.ulongs[0] = r0, out.ulongs[1] = r1, out.ulongs[2] = r2, out.ulongs[3] = r3, out.ulongs[4] = r4;
    return(out);
}

int main(int argc,const char * argv[])
{
    int32_t sblocksize = 0,len; uint8_t *buf; bits320 prod; bits256 tmp,hash; // sblocksize inits to 0 so the 1MB default below applies
    struct sha256_vstate md; FILE *fp,*sblocks;
    if ( argc < 2 )
        printf("usage: %s datafile [sblocksize]\n",argv[0]);
    else if ( argc >= 3 && (sblocksize= atoi(argv[2])) < 32 ) // argc >= 3 so an explicit sblocksize argument is actually parsed
        printf("usage: %s datafile [sblocksize] # min sblocksize is 32\n",argv[0]);
    else if ( (fp= fopen(argv[1],"rb")) == 0 )
        printf("usage: %s datafile [sblocksize] # cant find %s\n",argv[0],argv[1]);
    else
    {
        if ( (sblocks= fopen("sblocks","wb")) == 0 )
        {
            printf("error creating sblocks output file\n");
            exit(-1);
        }
        if ( sblocksize == 0 )
            sblocksize = 1024*1024;
        printf("calculating sblocks for %s\n",argv[1]);
        memset(tmp.bytes,0,sizeof(tmp)), tmp.bytes[0] = 9;
        prod = fexpand(tmp); // start with valid field element. might want to use the unit element
        buf = calloc(1,sblocksize);
        while ( (len= (int32_t)fread(buf,1,sblocksize,fp)) > 0 )
        {
            sha256_vinit(&md), sha256_vprocess(&md,buf,sblocksize), sha256_vdone(&md,hash.bytes);
            fwrite(hash.bytes,1,sizeof(hash),sblocks); // save sha256 of sblock
            hash.bytes[0] &= 0xf8, hash.bytes[31] &= 0x7f, hash.bytes[31] |= 0x40;
            prod = fmul(prod,fexpand(hash)); // multiply field element into the running product
            memset(buf,0,sblocksize);
        }
        tmp = fcontract(prod);
        fwrite(tmp.bytes,1,sizeof(tmp),sblocks); // this is the value of all the data
        free(buf);
        fclose(fp);
        fclose(sblocks);
        return(0);
    }
    return(-1);
}


I didn't have a chance to verify it, but all it is doing is calculating the SHA256 hash of each sblock, mapping it to a field element, and multiplying them together for a combined value. The field operations work at the 320-bit size, so the 256-bit hash needs to be expanded and contracted.

Since it is purely a field multiplication, it has no other protections; signatures and whatnot will need to be added on top of this. Note that it is unordered, as the fmul is just a pure multiply; if you need it to be ordered, then just include the sequence number in the sha256 calculation.

It is totally self-contained portable C, so save it to a file, then gcc it to make the executable. The output is an sblocks file that has the corresponding hash for each sblock, along with the total at the end.

Hopefully it works for what you need. It is just quick proof-of-concept code, but if any bit is changed in any of the sblocks, that hash will change, which will change the combined product. By committing to the sblock hashes, you can prove that you didn't corrupt the data you got, as long as the data provider also runs the same program and signs the result. Any subset that is delivered can be verified independently to match your posted set of sblock hashes.
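For the recipient side, here is a hypothetical check of one delivered sblock against the published sblocks file, reusing the routines above (verify_sblock and the file layout it assumes are illustrative):

Code:
// returns 0 if the zero-padded sblock in buf matches the k'th published hash
int32_t verify_sblock(const char *sblocksfile,int32_t k,uint8_t *buf,int32_t sblocksize)
{
    bits256 expected,hash; struct sha256_vstate md; FILE *fp;
    if ( (fp= fopen(sblocksfile,"rb")) == 0 )
        return(-1);
    fseek(fp,(long)k * sizeof(expected),SEEK_SET); // hashes are stored back to back
    if ( fread(expected.bytes,1,sizeof(expected),fp) != sizeof(expected) )
    {
        fclose(fp);
        return(-1);
    }
    fclose(fp);
    sha256_vinit(&md), sha256_vprocess(&md,buf,sblocksize), sha256_vdone(&md,hash.bytes);
    return(memcmp(hash.bytes,expected.bytes,sizeof(hash)) == 0 ? 0 : -1);
}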

I think that satisfies your requirement?

James

P.S. If the chance of collision (around 1 in 2^120) is too high, then a second hash can be used to create two elements per sblock. The odds of two different hash collisions with the same data are vanishingly small.

TPTB_need_war
Sr. Member
February 17, 2016, 08:49:41 PM
#16

Isn't the simple solution to include the signature of the entire document with each chunk you forward? If the signer claims he provided different text, he will need to produce a document which matches the hash of what he signed, but he can't do that and also lie.

There are usually simpler solutions. Just think out of the box and paradigm shift the problem.

jl777
Legendary
February 17, 2016, 09:02:57 PM
#17

Isn't the simple solution to include the signature of the entire document with each chunk you forward? If the signer claims he provided different text, he will need to produce a document which matches the hash of what he signed, but he can't do that and also lie.

There are usually simpler solutions. Just think out of the box and paradigm shift the problem.
I think the issue is that the originator of the data might be antagonistic, refusing to cooperate and putting the burden on the intermediary, but it is a good idea to include the signature in the hash of each sblock.

I think just using field multiplication of N different hashes for each sblock will let you achieve whatever confidence level about collisions you want, at the cost of more space. Curve25519 field multiplication is VERY fast; it will probably do more than a million multiplies per second on a single core. The limitation will probably be HDD speed loading the data, or the sha256 calcs.

James

P.S. Not sure it can get much simpler than 300 lines of C code...

TPTB_need_war
Sr. Member
February 17, 2016, 09:08:40 PM
#18

Okay then, jl777. Another way to do it is to have the originator sign a hash which is the root of a Merkle tree. Then you can break the document into words, or even individual characters if you want, and you can send any portions of the document and prove the originator signed those.
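To make that concrete, here is a small sketch of the root computation, reusing the sha256_v* routines and bits256 type from jl777's code above (merkle_root is illustrative; the leaves array needs room for one extra element so an odd level can duplicate its last entry):

Code:
// build a Merkle root in place over n leaf hashes; the originator signs the
// root, and any single leaf can later be proven with log2(n) sibling hashes
bits256 merkle_root(bits256 *leaves,int32_t n)
{
    int32_t i; uint8_t pair[64]; struct sha256_vstate md;
    while ( n > 1 )
    {
        if ( (n & 1) != 0 ) // odd count: duplicate the last entry
        {
            leaves[n] = leaves[n - 1];
            n++;
        }
        for (i=0; i<n; i+=2) // hash each pair into the next level
        {
            memcpy(pair,leaves[i].bytes,32);
            memcpy(pair + 32,leaves[i + 1].bytes,32);
            sha256_vinit(&md);
            sha256_vprocess(&md,pair,64);
            sha256_vdone(&md,leaves[i >> 1].bytes);
        }
        n >>= 1;
    }
    return(leaves[0]);
}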

jl777
Legendary
February 17, 2016, 09:20:24 PM
#19

Okay then, jl777. Another way to do it is to have the originator sign a hash which is the root of a Merkle tree. Then you can break the document into words, or even individual characters if you want, and you can send any portions of the document and prove the originator signed those.
A Merkle tree adds complication; not much, but compared to 300 lines, measurable. The 300 lines include the SHA256 and the field multiplication.

But with a Merkle tree I'm not sure it satisfies the requirements of this problem. The originator can be assumed to be non-cooperative and the only one with the entire dataset. If the data is molested, I would think we would need to determine whether it was the intermediary or the final recipient that modified the data.

How can the intermediary, who doesn't have the data, use a Merkle tree to verify that the final recipient has the right sblock? Since the Merkle tree only verifies that all leaves are correct, it seems all we find out is that the data was corrupted, but not who corrupted it. Maybe that is enough if all parties follow a signed protocol when the sblocks are delivered.

Anyway, interesting problem. I need feedback from the OP on whether the requirements are met. Lunch break is over, back to debugging I go.

James

TPTB_need_war
Sr. Member
February 17, 2016, 09:27:09 PM (last edit: February 17, 2016, 10:09:12 PM)
#20

jl777, if the originator signed the root hash of the Merkle tree when he provided it to the intermediary, then the intermediary can prove that any fragment(s) of the document were signed by the originator. The originator is the one who breaks the document up into words or characters at the leaves. A Merkle tree is a very space-efficient structure if the granularity is very high.

I suppose your approach of deterministically mapping each hash to a field element and multiplying them all together (with the originator signing the product) is more space-efficient, but I think it is roughly an order of magnitude slower. Why not add instead of multiply, since adding is much faster (one assumes hashes are difficult to preimage)?

Correction: the Merkle tree is also more space-efficient (as well as being faster), because the intermediary doesn't specify the hashes for all the leaves when writing a proof.
