BurtW
Legendary
Offline
Activity: 2646
Merit: 1138
All paid signature campaigns should be banned.
|
|
September 06, 2013, 04:31:47 PM |
|
Quote from: B(asic)Miner
For 100 megs, if you want to split fine hairs, Aahzman, the final crypto key would look (not exactly, but something) like this: (xxx.yyyyyyyyyy.zzz), where each of those three parts is required to tell the decoding engine how to work out the details and solve the block. Since only one crypto key would be needed, whatever amount of data is above that (3 x's, 10 y's, and 3 z's = 16 characters x 7 bits = 112 bytes) ... so maybe for 100 megs, only 112 bytes would be needed.

If you compress a 100 megabyte file down to 112 bytes, that would be really cool. I have only one small question for you: if I change one byte in the 100 megabyte file, would the xxx.yyyyyyyyyy.zzz crypto key change? I am going to assume it must, because you need to be able to recover the two different (different by only one byte) 100 megabyte files, right?
|
Our family was terrorized by Homeland Security. Read all about it here: http://www.jmwagner.com/ and http://www.burtw.com/ Any donations to help us recover from the $300,000 in legal fees and forced donations to the Federal Asset Forfeiture slush fund are greatly appreciated!
|
|
|
B(asic)Miner (OP)
Newbie
Offline
Activity: 28
Merit: 0
|
|
September 06, 2013, 05:09:46 PM |
|
Quote from: BurtW
If you compress a 100 megabyte file down to 112 bytes, that would be really cool. I have only one small question for you: if I change one byte in the 100 megabyte file, would the xxx.yyyyyyyyyy.zzz crypto key change?

Hi, Burt, after reading the resume you sent me, I almost fell out of my chair. You not only have every qualification needed for my project, you also have experience working with video-camera microprocessors. And one part of the theory is that once we get this working, we will be able to record video directly into the crypto code format under certain size limits (say 1 GB); anything over will be a tad more complex, but only a tad. I am very interested now in sharing my theory with you. You must have worked with NDA documents before. Let's exchange contact information by PM and get started. I will of course record my documents in the crypto block chain using Proof of Existence first. Then we can chat via Skype. You will have to help me formulate the correct terminology for describing this. I tried before with one of my best friends (a computer programmer for Gas Powered Games), but my "creative terminology" insulted his multi-talented logical brain and he couldn't "get me" ... I don't want that to happen again.

PS: If you change even 1 BIT anywhere in the 100 megabyte file from your question, then yes, the crypto key changes to something else. It's a unique DNA-like signature of the entire contents of the file, so every last bit is necessary to decode it, making it truly lossless. I can almost feel from your question that you are already close to understanding where I am going with this (and that both exhilarates me and frightens me, hahaha).

Spndr7, I would need a sort of mini version of my encoding program to be able to answer your question regarding the 96 bytes of binary data. I can't do it in my head, and I can't do it on paper with that much data. I can do about 10 bytes, but even that would take something like 12 hours of work by hand. This kind of work is exactly why computers were invented in the first place. The good news is that I know how to program in BASIC, and with the support you are all giving me, perhaps I should try to program just the encoder part so that I can answer your question up to a certain level, say at least 1024 bytes of data. Then I can share it with the people who are interested, have signed the NDAs, and wish to help me.
|
|
|
|
kslavik
Sr. Member
Offline
Activity: 441
Merit: 250
GET IN - Smart Ticket Protocol - Live in market!
|
|
September 06, 2013, 05:15:51 PM |
|
Quote from: B(asic)Miner
Spndr7, I would need a sort of mini version of my encoding program to be able to answer your question regarding the 96 bytes of binary data. I can't do it in my head, and I can't do it on paper with that much data. I can do about 10 bytes, but even that would take like 12 hours of work by hand.

Would you share how long it would take you to do it on paper with 8, 9, and 11 bytes? I want to see how the complexity grows as the data set gets bigger.

Edit: and how big the result would be for those cases (8, 9, 10, 11 bytes).
|
|
|
|
B(asic)Miner (OP)
Newbie
Offline
Activity: 28
Merit: 0
|
|
September 06, 2013, 05:26:59 PM |
|
Quote from: kslavik
Would you share how long it would take you to do it on paper with 9 bytes, 8 bytes, 11 bytes? I want to see how complexity would grow with increased data sets. Edit: and how big the result would be for those cases (8, 9, 10, 11 bytes).

Time scale by hand to get a crypto code: approximately 30 minutes for each byte, due to double-checking. If even one number is done wrong along the way, the whole process is ruined, so double- and triple-checking is a must. Around 6 or 7 bytes into the process you begin to get paranoid that you've made an error, because by hand it's easy to do. Sometimes you have to start all the way over and work back through to 6 and 7 again; in that case, it's 1 hour for each byte. My friend and I spent hours just checking and rechecking every digit in our trial runs. It was fun, but exhausting. And that was back when I hadn't optimized my theory down to a few rules. It had so many rules in the beginning, it was impossible to understand. But I knew what I was trying to do, I just hadn't asked the right questions yet. Anyway, the double-checking along the way is why I said it would take so long to do 10 bytes by hand. If done by a computer, the encoding would take 1/1000th of a millisecond (I'm joking, guys, don't write flames over this number, it's just some comic relief at this point). Thanks for your interest, guys.
|
|
|
|
phillipsjk
Legendary
Offline
Activity: 1008
Merit: 1001
Let the chips fall where they may.
|
|
September 06, 2013, 05:30:53 PM Last edit: September 06, 2013, 05:51:10 PM by phillipsjk |
|
Quote from: B(asic)Miner
Numbers like what my compression ratio could be once the software is completed are pointless now. All I can say is, for large data, like 100 gigabytes, that kind of guess on my part is really just a guess (99.8%) at this point, and is not worth getting all wound up over. For a file the size of the Hutter contest, 100 megs, I am fairly certain the brute force method can still apply, so internal file splitting to achieve multiple crypto keys would not be needed for a file that small. Thus, I am certain I could achieve 100% compression of that file, if you want to call it compression. It's not really compression as society knows it today. It's a way of hiding the data and then knowing how to bring it back from a known natural container. Bringing it back requires brute force methods, as far as I know. For 100 megs, if you want to split fine hairs, Aahzman, the final crypto key would look (not exactly, but something) like this: (xxx.yyyyyyyyyy.zzz), where each of those three parts is required to tell the decoding engine how to work out the details and solve the block. Since only one crypto key would be needed, whatever amount of data is above that (3 x's, 10 y's, and 3 z's = 16 characters x 7 bits = 112 bytes) ... so maybe for 100 megs, only 112 bytes would be needed. But for larger sizes, I couldn't yet say until it was explored by my team. Thanks again.

So you are guessing. If you read between the lines in Rationale for a Large Text Compression Benchmark, you will see they have an estimate of how much compression they expect is possible. The estimated entropy of the text, together with the pigeonhole principle (mentioned in this thread), puts an upper bound on the maximum compression that is achievable. That page estimates about an 8:1 compression ratio, assuming 8-bit characters (Wikipedia uses UTF-8). They even include a proof that the length of the shortest possible program is not computable:

Quote:
The entropy of this and all compression benchmarks is unknown. Unfortunately there is no direct way to compute it. In the absence of a known probability distribution, we may define the information content, or Kolmogorov complexity K(s), of a string s as the length of the shortest program that outputs s [11]. K(s) is independent of the language used to write the program, up to a constant independent of s, because any program written in language L1 can be rewritten in L2 by appending a compiler for L1 written in L2.
Kolmogorov also proved that K(s) is not computable. The proof is simple. Suppose that K(s) is computable. Then you could write a program to find the first string s whose complexity is at least n bits, for any n as follows:
s := ""
while K(s) < n do
    s := next(s)  // in some lexicographical ordering
output s
Now let n = 10000. The above program is shorter than n bits, but it outputs a string s with K(s) ≥ 10000, which is a contradiction. This proof can be applied to any language by making n sufficiently large.
I have seen cryptographic hash functions described as "compression" in the documentation for Freenet. The SHA-256 hash of the test file in that contest is:

2b49720ec4d78c3c9fabaee6e4179a5e997302b3a70029f30f2d582218c024a8  enwik8

However, that hash is not sufficient to reconstruct the original file, even with brute force. Leaving aside the calculation that brute-forcing SHA-256 would take more energy than is in the known universe, it is still not enough. The difficulty is that SHA-256 gives you a string of a fixed length, and, as with all hash functions, there exists an infinite number of inputs that will give you the same hash. So even if you use "brute force" to find an input that gives you the same hash, chances are high that it won't actually match the original. You can try to improve the odds by making sure all of your guesses are the correct size, but the pigeonhole principle says that even if you generate 100 MB of noise, you only have to vary 32 bytes of the file to generate a collision.
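This collision problem is easy to see with a deliberately weakened digest. The sketch below is my own illustration, not anything from the thread: it truncates SHA-256 to 2 bytes (the `tiny_digest` name and the 16-bit truncation are assumptions made purely so the brute-force "decompressor" finishes in seconds) and shows that the first same-sized inputs whose digest matches are not the original file.

```python
import hashlib
from itertools import product

def tiny_digest(data: bytes) -> bytes:
    """Deliberately weakened 16-bit 'hash': first 2 bytes of SHA-256."""
    return hashlib.sha256(data).digest()[:2]

original = b"Hi!"                      # a 3-byte "file"
target = tiny_digest(original)

# What a brute-force "decompressor" would have to do: try every
# same-sized input, in order, until the digest matches.
matches = []
for candidate in product(range(256), repeat=3):
    data = bytes(candidate)
    if tiny_digest(data) == target:
        matches.append(data)
        if len(matches) >= 5:          # plenty of collisions; stop early
            break

# Several distinct 3-byte inputs share the digest, so the digest alone
# cannot say which of them was the original file.
print(len(matches), matches[0] == original)
```

Real SHA-256 has a 256-bit digest, so the search is hopeless rather than merely ambiguous, but the counting problem is identical: a fixed-size digest, an unbounded set of inputs.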
|
James' OpenPGP public key fingerprint: EB14 9E5B F80C 1F2D 3EBE 0A2F B3DE 81FF 7B9D 5160
|
|
|
B(asic)Miner (OP)
Newbie
Offline
Activity: 28
Merit: 0
|
|
September 06, 2013, 05:44:42 PM |
|
Quote from: kslavik
Edit: and how big the result would be for those cases (8,9,10,11 bytes)

Here is the size for 1 byte: (x.yyy.z). Here is the size for 5 bytes: (x.yyyy.z).

Here is an intriguing thing about this theory. If we get it working, for files that need no internal splitting (maybe 500 MB or less) there would be only one crypto key. If you could remember it in your head, you wouldn't even need any file! Our program will have a crypto key direct entry mode: you type in the crypto code from memory and your file is restored. Why? For total legal security of your documents!

Under American law, if you go through the airport, anything you carry with you, like your computer, can be legally opened and searched. If you put your iPhone inside a metal box with a key, that key can be demanded by the government because it's physical. But the law at this time won't allow the government to force you to give up the contents of your mind, so if you use a lock that requires a password, they can't make you hand over the password. Imagine zipping your top secret documents and then compressing them using our software. Let's say your files are 450 megs, and our software's cut-off point for a single crypto key is 500 MB. So you get your crypto key, easily memorize it, and now you are carrying the entire 450 MB file IN YOUR MIND. It's like Johnny Mnemonic, only better, because all that data isn't actually in your mind! When you go through the airport, you don't even need your computer; the top secret files are in your mind, safe from bad guys, safe from the law should they try to use it against you.

So then you get to where you are going, download our software onto any computer you want, use the "crypto key direct entry" mode, type in your crypto key following the format (x.yyyyy.z), and your document is recreated out of the ether for you! It sounds like science fiction, but it just makes me feel exhilarated beyond words! I can't wait to see this work. It will be a true marvel.
|
|
|
|
murraypaul
|
|
September 06, 2013, 05:46:43 PM |
|
Quote from: B(asic)Miner
PS: If you change even 1 BIT anywhere in the 100 megabyte file from your question, then yes, the crypto key changes to something else. It's a unique DNA-like signature of the entire contents of the file, so every last bit is necessary to decode it, making it truly lossless.

In that case, the average amount of space taken up by your crypto key will be the same as the average size of the files you are compressing. That is just a simple fact. There are 2^32 possible files of 32 bits in length; they are all unique, and therefore they cannot all be indexed by less than 32 bits of information each. Let me guess: your method is something along the lines of computing some sort of hash of the file, and then, to decompress, hashing every possible block of the same length until you find one which matches the hash, and hey presto, you've reconstructed your original file? That doesn't work, as there are many, many different files which would all give the same hash value.
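This counting argument can be checked with two lines of arithmetic (my own sketch, not part of the post above): even if you spent every bit-string shorter than n bits as a "crypto key", you would still be one key short of covering all n-bit files, so some file must map to a key at least as long as itself.

```python
# Count all bit-strings strictly shorter than n bits, over every length
# 0, 1, ..., n-1, and compare with the number of files of exactly n bits.
n = 32
shorter = sum(2**length for length in range(n))  # 2^0 + 2^1 + ... + 2^(n-1)
exact = 2**n

# The geometric sum gives 2^n - 1: one fewer short string than there are
# n-bit files, so no lossless scheme can shrink every n-bit input.
print(shorter, exact, shorter == exact - 1)
```

The same identity holds for any n, which is why the argument does not depend on the 32-bit example.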
|
BTC: 16TgAGdiTSsTWSsBDphebNJCFr1NT78xFW SRC: scefi1XMhq91n3oF5FrE3HqddVvvCZP9KB
|
|
|
B(asic)Miner (OP)
Newbie
Offline
Activity: 28
Merit: 0
|
|
September 06, 2013, 05:55:01 PM |
|
Quote from: phillipsjk
So you are guessing. ... The estimated entropy of the text, together with the pigeonhole principle (mentioned in this thread), puts an upper bound on the maximum compression that is achievable. ... So, even if you use "brute force" to find an input that gives you the same hash, chances are high that it won't actually match the original.

I believe you, and I know you know what you are talking about. But the flaw in all of this is calling what I am doing "compression" ... it is not the changing of the information into a smaller form using a system that throws out predictable information and then lets it get added back at decompression. What I am doing is copying the contents of the file into a virtual container, so the entire file is actually still there. I don't wish to have to do this, because others will again be calling me a kook, but I fear I must. Imagine a virtual space (say, another dimension), as seen for example in the Harry Potter movies, where the girl's hat always held whatever she needed and she could pull it out at will. It still existed in its original form but was hiding in another dimension. That would be a more likely description of what I am doing here: shoving the file into a virtual reality that exists in nature, if you know how to look at nature that way.
The code you get is a 100% logical operand that can be traced back to the original file source. Once that is solved, the entire "blockchain", as it were, IS the file itself, hidden in this virtual container. That's why talking about compression schemes cannot begin to rationalize this. You have to think outside of the box totally on this one. It's an entirely new thing. That's why I doubt those behind the Hutter Prize would actually pay out for this: what they are trying to get the world to do is create smarter compression engines that beat the current ones in a mathematically smart way, so that intelligence can contribute to the world. I would still like to try, but my feeling is that when they see how I did this, they will disqualify me from winning that contest, even though the achievement itself would be quite remarkable. Plus, I am not sure I can get this to work with less than 2 to 3 GB of RAM, which would also disqualify it. And there is no way to know whether what I am proposing will require an infinite length of time to decompress the file. That also has to be addressed before we would know how good this is. Again, thanks guys.
|
|
|
|
minzie
|
|
September 06, 2013, 06:00:00 PM |
|
Quote from: B(asic)Miner
But the law at this time won't allow the government to force you to give the contents of your mind.

A judge certainly can order you to give it up with just cause, and can jail you for contempt of court if you fail to comply. So maybe not 'force', but they can find enough other reasons to make your life miserable that you might be compelled to give it up.
|
|
|
|
B(asic)Miner (OP)
Newbie
Offline
Activity: 28
Merit: 0
|
|
September 06, 2013, 06:04:32 PM |
|
Quote from: murraypaul
In that case, the average amount of space taken up with your crypto key will be the same as the average size of the files you are compressing. ... That doesn't work, as there are many many different files which would all give the same hash value.

My goal is to compress large files, not small 10k files. The smallest file I would even want to compress with this theory would be 1-2 megabytes: small enough to compress some JPEG images into a zip file, then take that and compress it through my system into a 10k file to send to your friends. I don't want to use this to compress small text files; that would be senseless. I'm talking about using this to encode 50 GB video games for sending over the PlayStation/Xbox networks at virtually no download cost or time. I'm talking about movie-rental websites sending Blu-ray movies to subscribers, where the true Blu-ray (full-sized at 50 GB) would encrypt to under 500 KB for quick download by customers who still have sub-megabit internet speeds.

I'm talking about people hoarding their video collections in 500 MB containers and being able to go to a friend's house, type in the direct crypto key, and retrieve their stashes out of thin air wherever they go to show their friends. The world will be fundamentally changed should we be able to pull this off. If not, I'll be the biggest laughingstock who ever lived, I suppose. It's a risk I have to take, though, because I like cool things and this would be a lot of fun to see born.
|
|
|
|
kslavik
Sr. Member
Offline
Activity: 441
Merit: 250
GET IN - Smart Ticket Protocol - Live in market!
|
|
September 06, 2013, 06:21:13 PM |
|
Well, first you have to prove mathematically that your method works for small data sets. The main premise for you to prove is that multiple files (data sets) wouldn't result in the same sequence of xx.yy.zz. It would be easy to check: just calculate it for all the numbers from 1 to, let's say, 2^32 and see if there are any collisions. If the resulting string for a 4-byte (2^32) input is less than 4 bytes, you will have collisions, due to the pigeonhole principle mentioned above.
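This exhaustive test can be run today against any stand-in for the proposed key. In the sketch below, `toy_key` is my own placeholder (the xx.yy.zz scheme is unspecified in the thread): it pushes all 2^16 two-byte inputs through a 1-byte key and tallies how many inputs pile up on each key, the same collision check described above, just at a smaller scale than 2^32.

```python
import hashlib
from collections import Counter

def toy_key(n: int) -> bytes:
    """Placeholder for the proposed key: a 1-byte truncated SHA-256.
    Any function into fewer bytes than its input must behave the same way."""
    return hashlib.sha256(n.to_bytes(2, "big")).digest()[:1]

# Map every 2-byte input through the 1-byte key and tally collisions.
keys = Counter(toy_key(n) for n in range(2**16))

distinct = len(keys)                 # at most 256 distinct 1-byte keys
busiest = keys.most_common(1)[0][1]  # inputs sharing the most popular key

# 65536 inputs land on at most 256 keys, so on average 256 inputs share
# each key: the key alone cannot single out which input produced it.
print(distinct, busiest)
```

Swapping in the real encoder for `toy_key` would run the exact test proposed: if its output for a 4-byte domain is shorter than 4 bytes, the tally must show collisions.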
|
|
|
|
tuckerblane
Newbie
Offline
Activity: 9
Merit: 0
|
|
September 06, 2013, 07:40:03 PM |
|
Quote from: B(asic)Miner
Here is an intriguing thing about this theory. If we get it working, for files that use no splitting internally (maybe 500 MB or less) there would only be one crypto key. If you could remember it in your head, you wouldn't even need any file! ... So then you get to where you are going, download our software onto any computer you want, use the "crypto key direct entry" mode, type in your crypto key following the format (x.yyyyy.z), and your document is recreated out of the ether for you! It sounds like science fiction, but it just makes me feel exhilarated beyond words!
That is science fiction. I download your program, zip and encrypt my 487 MB of files, and memorize the crypto key xxx.yyyyyyyyy.zz. That data is now all on my laptop. It's safe. It's secure. It's not going anywhere. When I reach my destination, I cannot simply download your program to any computer and type in my key: the information is stored on my laptop 1500 miles away, not on your server. And if my laptop is powered down with the battery removed, then the information is completely cold/offline, and impossible to retrieve from a distance. I would suggest seeing a psychiatrist. That is a professional opinion.
|
|
|
|
2112
Legendary
Offline
Activity: 2128
Merit: 1073
|
|
September 06, 2013, 08:38:16 PM |
|
Wondered about this one for years and played around with it a bit. I've never got it working, but it should be possible to compress a big string of binary into a series of calculations. The initial value forms a seed and would probably contain the number of bits for the start value, the number of bits for the formula codes, the number of steps to completion, and the initial value and formula codes. When that's run, it would give a string of binary with the first few bits representing the number of bits for this run's formula code, followed by the formula code; the remaining bits are the value to be acted on. The trouble is that so many bits of data can only have so many combinations, so it would be impossible for them to contain the same amount of information as more bits of data.
http://en.wikipedia.org/wiki/Fractal_compression
|
|
|
|
mberg2007
Member
Offline
Activity: 117
Merit: 10
|
|
September 06, 2013, 09:48:04 PM |
|
Basic information theory says it is not possible to produce an algorithm that reduces the size of every possible input sequence. No matter what magical algorithm you are using, I will be able to construct input that, when compressed, takes up as much or more space than before.

If you use a magical "DNA sequence" (hash value, call it what you will) with a length of 128 bytes, that gives you 2^1024 possible ways to represent a file. The problem is that I can construct 2^1024+1 different files, and it then follows that two of these files will produce the same hash value. You will have no way of knowing which of these two files was used as the original input for your algorithm.
There is no "loophole" or "alternative dimensions" that will allow you to represent 2^1024+1 unique values with 2^1024 bits.
-Michael
|
|
|
|
murraypaul
|
|
September 06, 2013, 09:49:59 PM |
|
Quote from: B(asic)Miner
The world will be fundamentally changed should we be able to pull this off. If not, I'll be the biggest laughingstock who ever lived, I suppose.

No, I'm afraid you'll be just another internet kook no one has ever heard of.
|
|
|
|
Buffer Overflow
Legendary
Offline
Activity: 1652
Merit: 1016
|
|
September 06, 2013, 10:39:46 PM |
|
Quote from: B(asic)Miner
If I follow your advice here, I would write out my idea, post it somewhere, and wait for glory to come. Meanwhile, someone else with money would take my idea, actually turn it into a software program, patent that, and since he came to the patent office first, he would get the credit, not me.
Release it on an open source license then.
|
|
|
|
BurtW
Legendary
Offline
Activity: 2646
Merit: 1138
All paid signature campaigns should be banned.
|
|
September 06, 2013, 11:33:29 PM Last edit: September 07, 2013, 04:46:15 PM by BurtW |
|
Quote from: mberg2007
There is no "loophole" or "alternative dimensions" that will allow you to represent 2^1024 + 1 unique values with 1024 bits.
-Michael
FIFY
|
|
|
|
brogramer
Newbie
Offline
Activity: 15
Merit: 0
|
|
September 07, 2013, 01:11:41 AM |
|
Generally, if you understand how computers work and what exactly "32 bits" represents (about 4 billion possible combinations), you start to get a very, VERY crystal-clear picture of how hard compression is.

To be honest, most compression nowadays is for very specific types of data (mp4/mp3/jpg, and "lossy"). General-purpose true compression (as in: what you compress, you get _exactly_ back when you decompress), i.e. zip/rar/etc., isn't very powerful in the grand scheme of things. It just removes "wasted" space.
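The "wasted space" point is easy to observe with zlib, the deflate library behind zip/gzip (a small sketch of my own, not from the post): redundant text shrinks a lot, while random data, which has no wasted space to remove, comes back slightly larger because of the container's framing overhead.

```python
import os
import zlib

text = b"the quick brown fox jumps over the lazy dog " * 1000  # redundant
noise = os.urandom(len(text))                                  # incompressible

packed_text = zlib.compress(text, 9)
packed_noise = zlib.compress(noise, 9)

# Repetitive input compresses dramatically; random input grows slightly,
# since deflate falls back to stored blocks plus header/checksum overhead.
print(len(text), len(packed_text), len(packed_noise))
```

Both outputs decompress back to their exact inputs, which is what "lossless" means; the asymmetry is purely in how much redundancy there was to remove.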
|
|
|
|
spndr7
Legendary
Offline
Activity: 1032
Merit: 1000
|
|
September 07, 2013, 01:57:20 AM |
|
@Basic miner: Consider also putting your idea on Encode.ru, the biggest data compression forum. The winners of the Hutter Prize are its members and regularly post there.
|
buzzeo.in - buzz GEO location
|
|
|
b!z
Legendary
Offline
Activity: 1582
Merit: 1010
|
|
September 07, 2013, 04:32:34 AM |
|
Quote from: spndr7
@Basic miner: Consider also putting your idea on Encode.ru, the biggest data compression forum. The winners of the Hutter Prize are its members and regularly post there.

Everybody will laugh at him :-)
|
|
|
|
|