B(asic)Miner (OP)
Newbie
Offline
Activity: 28
Merit: 0
September 09, 2013, 05:08:43 PM
BurtW, no... I am not trying to locate sequences of digits in Pi. I've come across that idea in my own research and felt it was not the way to go. I was told hunting for strings in Pi couldn't possibly work, and it would also have required math that I am not capable of in the first place.
Pi is an index that we move through as we are encoding. Movement is based on whether the digits are 0s or 1s, and it occurs in 8-bit increments (one character at a time); therefore all data must be read in its native format, and if it is not in binary format (hexadecimal code, for example, is not), it must be converted to ASCII binary. Starting from the decimal point, we read forward 8 bits per character. I have tested this, and the general rule is about 100 indexes per byte.
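A minimal sketch of what those rules might look like in code, assuming the walk works roughly as described (the exact rules were never published in the thread, so every mechanic here is a guess, and seeded random bits stand in for Pi's binary expansion). The punchline is the thread's own: the recorded distances average around 256, which already takes about 8 bits each to write down.
Code:
import random

random.seed(314159)                      # seeded stand-in for Pi's bit stream
STREAM = [random.randint(0, 1) for _ in range(2_000_000)]

def encode(data: bytes) -> list[int]:
    """For each byte, record how far we scanned to find its 8-bit pattern."""
    distances, pos = [], 0
    for byte in data:
        pattern = [(byte >> (7 - i)) & 1 for i in range(8)]
        start = pos
        while STREAM[pos:pos + 8] != pattern:
            pos += 1
            if pos + 8 > len(STREAM):
                raise RuntimeError("ran out of stream")
        distances.append(pos - start)
        pos += 8                         # step past the matched window
    return distances

dists = encode(b"hello world, this is a test of the pi index idea")
print(sum(dists) / len(dists))           # averages roughly 256 across inputs
# Storing each distance needs ~log2(256) = 8 bits, so the "index" is no
# smaller than the byte it encodes.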
murraypaul, I understand that you are very smart and that you know how logical math works, but are you going to sit here and tell me that every one of those possible combinations of 64-bit files is arranged into a meaningful, orderly arrangement? No. Most of the arrangements are going to be nothing but white noise. If I go and create an orderly 64-bit image, I'm not sure that image wasn't already ordered in Pi to begin with. There might be 4.29 billion unique 32-bit files possible, but are there actually 4.29 billion such files in existence now? Could there even be? I doubt it. How much of those 4.29 billion unique combinations did the Nintendo era actually use in creating its games? Probably 1%. And then we moved on to 64-bit games. And then we moved on again. We never sat too long in any one size before expanding the sizes we needed and creating vastly more incredible works.
Besides, I know my theory won't work with one long file, because I believe it would take 100 years for the CPU/GPU, whatever, to actually find the sequence going from the index end point in Pi backwards to the decimal point, verifying that it's the one unique path back to the start of Pi after the decimal point. So I would be working with 500 MB chunks, called Mega Chunks. For every 2 GB of data there would be four Mega Chunks, for example, each with its own Crypto Code, as in my video-file example above.
Vladamir, what is your trip? I swear, sometimes you programmers can be so negative about everything, shooting everything down without caring that there is a person on the other end of your scathing words who is trying to do something cool. I don't need such nonsense here. Help me or stay out of the aisle. I'm working here.
B(asic)Miner (OP)
Newbie
Offline
Activity: 28
Merit: 0
September 09, 2013, 05:20:11 PM
Here is a very simple example.
We know pi out to who knows how many digits - for the sake of this example let's call it "enough".
Your phone number is 123-456-7890
You find your phone number and it happens to be at location 987,654 in the number pi.
You can say "My phone number is 123-456-7890". This takes 10 digits.
>>snip
BurtW, come on, man. This is funny, but only if you are not really trying to mock me. I get the humor, but still. I've said before, and will say again, this is not for small compression of under 1 megabyte of data. Since I am also assigning one Crypto Key per set amount of data (for example, 500 megabytes, unless that doesn't work so well), any file larger than the set size would have multiple Crypto Keys, meaning combinations of Crypto Keys, further adding to the complexity of the overall N bits of index, since then we would be combining Crypto Keys into new patterns as well as relying on a singular Pi index. I am beginning to understand what you are telling me about possibilities, but you also must know that not every possibility contains a valid, thought-organized answer; some would always be random noise. Perhaps Nature has a set limit on the amount of creativity possible, which is why evolution causes us to change in the first place, to avoid a buffer overflow error of some sort. (Okay, that's my own attempt at a joke.)
murraypaul
September 09, 2013, 05:24:29 PM
murraypaul, I understand that you are very smart and that you know how logical math works, but are you going to sit here and tell me that every one of those possible combinations of 64-bit files is arranged into a meaningful, orderly arrangement? No. Most of the arrangements are going to be nothing but white noise. [...]

Which is why compression programs like zip and rar generally do produce smaller files: they target particular patterns of data which tend to be seen. General-purpose compression routines can only get you so far, though, which is why there are specialist audio and video compression routines, which target the common types of data seen in those arenas. But you started off with a general claim about being able to compress all files. I think you now understand that you can't. So now you realise you haven't got something magical, the next question is: what makes you think your method is better than those which already exist, and have been comprehensively studied for years or even decades?
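That point is easy to check with any stock compressor. A quick comparison with Python's zlib (nothing specific to this thread) shows patterned data shrinking and random data refusing to:
Code:
import os, zlib

patterned = b"the quick brown fox jumps over the lazy dog " * 250
noise = os.urandom(len(patterned))       # statistically incompressible bytes

for name, blob in (("patterned", patterned), ("random", noise)):
    packed = zlib.compress(blob, level=9)
    print(f"{name}: {len(blob)} bytes -> {len(packed)} bytes")
# Typical result: the repetitive text collapses to a small fraction of its
# size, while the random bytes come out slightly *larger* than they went in.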
B(asic)Miner (OP)
Newbie
Offline
Activity: 28
Merit: 0
September 09, 2013, 05:31:00 PM
Isn't it possible we could work with a large enough index key that the N bits you are referring to are larger than the number of files in existence at present or at any time in the near future? How many total files are there in the world at this point? Does anyone know?

You could pick some arbitrarily large value of N, sure. But the second shoe, which hasn't dropped yet, is that your index key will (on average) be the same size as the files you are trying to compress. So it takes up just as much space (on average) to store the index key as it would to store the file itself. To compress 1MB files, you need a 1MB index key. To compress 100GB files, you need a 100GB index key.

But where is that space? It's in memory, not hard drive space. The key gets expanded to the 1GB or 2GB size limit, sure. Then you encode your file forward in memory, and once the final Crypto Key is obtained, you dump the program, which dumps the Pi Index file too. Now you send your friend the key via email. He opens the software on the other end, which loads Pi into memory again, sure, but the point is that the internet never saw the data. The data did not choke up the middle ground. No data had to actually travel over the servers, no file analysis was done on it by the NSA, and there are a host of other cool benefits.

Also, as I mentioned before, I am not going to be attacking 100GB files directly. We can re-use the same 1GB or 2GB Pi index, cut the file into chunks, and direct each chunk through the same pipeline, so that the final file looks like this:

[OPENFILE]
[filesize=100000000000000_&_Name="100GBofStuffForWhomever.avi"]
[MChunks=50_&_REM: Begin Chunk(s) on Next Line!]
[1,w, xxxxxx, y, zzzz]
[2,w, xxxxxx, y, zzzz]
[3,w, xxxxxx, y, zzzz]
... etc
[CLOSEFILE]

which then gets stitched back together at decompression time onto the hard drive in its original form.
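The container itself is easy enough to mock up. Here is a minimal sketch with hypothetical field names (the post only gives placeholders like "w" and "zzzz"); it shows the manifest is the trivial part, while producing Crypto Keys smaller than the chunks they name remains the unsolved one:
Code:
from dataclasses import dataclass

@dataclass
class Chunk:
    number: int   # 1-based position of this Mega Chunk
    key: str      # placeholder for the post's "w, xxxxxx, y, zzzz" fields

@dataclass
class Manifest:
    filesize: int
    name: str
    chunks: list  # list[Chunk]

    def serialize(self) -> str:
        lines = [
            "[OPENFILE]",
            f'[filesize={self.filesize}_&_Name="{self.name}"]',
            f"[MChunks={len(self.chunks)}_&_REM: Begin Chunk(s) on Next Line!]",
        ]
        lines += [f"[{c.number},{c.key}]" for c in self.chunks]
        lines.append("[CLOSEFILE]")
        return "\n".join(lines)

m = Manifest(100_000_000_000_000, "100GBofStuffForWhomever.avi",
             [Chunk(i, "w, xxxxxx, y, zzzz") for i in range(1, 4)])
print(m.serialize())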
kslavik
Sr. Member
Offline
Activity: 441
Merit: 250
GET IN - Smart Ticket Protocol - Live in market!
September 09, 2013, 05:31:08 PM
murraypaul, I understand that you are very smart and that you know how logical math works, but are you going to sit here and tell me that every one of those possible combinations of 64-bit files is arranged into a meaningful, orderly arrangement? No. Most of the arrangements are going to be nothing but white noise. [...]
Forget about 8/16/32/64-bit games or programs; those sizes refer to the instructions and registers computers use to execute instructions, and they have nothing to do with the topic at hand. I gave you 64 bits just to illustrate how many possible combinations a file of 8 bytes can contain; I could have given you a file of 72 bits, or of 1 kilobyte, as an example. Do you know how many possible files could be created with a size of exactly 1 KB? 2^8192. Do you know how big that number is? How are you going to represent all those files with an index which is less than 1 KB in size? You are right that most of those files do not even exist, but how would you know, once you use your "method" to "decompress" a file, that the file you have found is the right one?
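That number is easy to put in perspective in Python, which handles arbitrary-precision integers natively:
Code:
n = 2 ** 8192        # number of distinct 1 KB (8192-bit) files
print(len(str(n)))   # 2467: the count itself is a 2467-digit number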
B(asic)Miner (OP)
Newbie
Offline
Activity: 28
Merit: 0
September 09, 2013, 05:45:41 PM
[...] What makes you think your method is better than those which already exist, and have been comprehensively studied for years or even decades?

What would you call being able to back up all of your movies in full Blu-ray or HD quality to your USB thumb drive and being able to carry them in your pocket for use at a friend's house anytime you wanted? That would be coolness.

Well, what would you call being able, as the CEO of a company for example, to add a whole bunch of important files into a rar file that you need for a meeting, files which are top secret while there are spies everywhere trying to get your data, but you put it in a container in Nature (more aptly put, you pull out of Nature a name for that file which is its thumbprint), take that code in your own mind with you to the meeting, and then generate that data on the target computer without the use of any USB drives, cloud caches, internal servers, etc.? I would call that better security than most, because chances are people would not even know you are carrying anything with you in the first place. And you can leave your sensitive laptop and TrueCrypt drives behind.

What would you call being able to send a huge videogame over the internet in moments and have it install itself without the need to host huge, expensive servers? I would call that a huge windfall for businesses. No more huge server farms to send huge files or do downloads or updates. Files are compressed under this theory and sent in 4K packets. You could use a standard dial-up modem and still achieve downloads of what would amount to gigabytes in mere moments on the client end of things.

It goes on and on. Come on, are you saying this isn't a valid idea in the first place? Now I am beginning to question whether you are for real. Maybe you are just heckling me for the fun of it. In that case, let this thread die and go back to your normal lives. Jeesh. People are unbelievable.
Sage
September 09, 2013, 05:49:58 PM
"All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident.
Arthur Schopenhauer, German philosopher (1788 – 1860)"
Kinda reminds me of the feedback I got when explaining a decentralized currency that would eventually take down central banks (pre-Bitcoin), while lacking any technical ability to make it a reality (thank God for Bitcoin).
Don't let them get you down.
If you believe in your idea, consider this approach: what's the absolute minimum viable proof of concept? What would it take to create it? Then find a way to get that proof of concept working.
murraypaul
September 09, 2013, 06:04:48 PM
Isn't it possible we could work with a large enough index key that the N bits you are referring to are larger than the number of files in existence at present or at any time in the near future? [...] But where is that space? It's in memory, not hard drive space. The key gets expanded to the 1GB or 2GB size limit, sure.

No, the key is that size. You cannot (on average across all files of that size) compress a 1MB file to an index smaller than 1MB.
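The counting argument behind this is short enough to run. A sketch of the pigeonhole bound, using 16-bit "files" so the numbers stay small:
Code:
n = 16                                         # pretend files are n bits long
files = 2 ** n                                 # distinct n-bit files
shorter_keys = sum(2 ** k for k in range(n))   # all keys of 0..n-1 bits
print(files, shorter_keys)                     # 65536 vs 65535
# There is always one more file than there are strictly shorter keys, so no
# scheme can give every n-bit file a shorter index. The same holds for
# n = 8_388_608 (a 1MB file); the gap only gets worse.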
murraypaul
September 09, 2013, 06:06:35 PM
"All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident.
Arthur Schopenhauer, German philosopher (1788 – 1860)"
"The fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown." Carl Sagan
kslavik
Sr. Member
Offline
Activity: 441
Merit: 250
GET IN - Smart Ticket Protocol - Live in market!
September 09, 2013, 06:08:49 PM
What would you call being able to back up all of your movies in full Blu-ray or HD quality to your USB thumb drive and being able to carry them in your pocket for use at a friend's house anytime you wanted? That would be coolness. [...]
You tried to create a "magic" algorithm to compress and encrypt data at the same time. How could one do that without any knowledge of information theory, or the ability to program, or even knowing what is out there already? I call it blissful ignorance, and yes, your idea to compress a huge amount of data into a simple reference is stupid. I did call your idea stupid, because it is based on ignorance, a lack of knowledge, and your refusal to educate yourself before trying to find investors for your pipe dream.
murraypaul
September 09, 2013, 06:08:54 PM
murraypaul, I understand that you are very smart and that you know how logical math works [...] Come on, are you saying this isn't a valid idea in the first place? Now I am beginning to question whether you are for real. Maybe you are just heckling me for the fun of it.

Yes, that is what we have been saying right from the beginning. Your claim that you have a compression method that will shrink all files is simply not possible. We've shown you the maths proving this. The fact that it would be really cool if it were true doesn't make it true.
Buffer Overflow
Legendary
Offline
Activity: 1652
Merit: 1016
September 09, 2013, 06:33:58 PM
Come on, are you saying this isn't a valid idea in the first place? Now I am beginning to question whether you are for real. [...]
At long last the penny has finally dropped! Oh well, another million-dollar idea collapses. Back to the drawing board....
B(asic)Miner (OP)
Newbie
Offline
Activity: 28
Merit: 0
September 09, 2013, 06:55:52 PM
Come on, are you saying this isn't a valid idea in the first place? [...]

At long last the penny has finally dropped! Oh well, another million-dollar idea collapses. Back to the drawing board....

Maybe for you, but not for me. I've spent years on this, and about a year testing it by hand. I had a computer-programmer friend helping me for a time, but he had some work issues and couldn't stay with it for long, so I got another friend to help me. I taught him the rules and had him send me Crypto Codes of 3 bytes, or 6 bytes, at a time, following my rules. It would take all day to do just one of them by hand, but eventually I would return the answer, and my friend was totally amazed when I got each one right. There was only ever one possible answer to be found.

The math itself doesn't lend any credence to the idea that just because there are 4.29 billion combinations, all 4.29 billion combinations are orderly arrangements that would even return a working file to begin with. Nature probably has some kind of limits built into it inherently, whereby only 10-20% of all the possibilities will ever be arranged into an orderly thing which, on a computer medium, would be called a working file. The chance that every Pi index would be the end of a working file seems ludicrous; Nature would then be shown not to be random or chaos-filled at all, but totally derivative and pre-programmed, where even static or noise would mean something if you knew how to organize it. We are getting into philosophy here, and I don't wish to do that.

Bottom line: none of you really understand my theory well enough to be able to discredit it so quickly. You have no right. You are not God. You may be smart, but you also don't see all ends, either; as history has shown, most scientists create something that goes on to become the rope for the world to hang from. I guess by trying to create this technique I am jumping in the scientific boat, but still...

By the way, I'm not blaming anyone for not understanding my theory. I haven't released all the rules or explained everything, so how could you? You are only guessing at this point. Of you all, at least BurtW seems to understand, by the way he asks questions; I believe he is sincere and trying to understand, and he even offers some good feedback. Thanks to BurtW for making my day. Even when you disagree, you do so with respect in your tone. I appreciate that.
murraypaul
September 09, 2013, 07:09:19 PM
Bottom line: none of you really understand my theory well enough to be able to discredit it so quickly. You have no right. You are not God.

No, I am a mathematician. We don't need to understand exactly how your 'compression' algorithm is meant to work. You cannot compress gigabyte files into a couple of dozen characters. Can't be done. N digits of alphanumeric index can only index 62^N maximum possible unique files. Fact.
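Plugging numbers into that bound makes the gap vivid. A back-of-envelope in Python, assuming a generous hypothetical 100-character alphanumeric key:
Code:
import math

key_space = 62 ** 100                    # files a 100-char [A-Za-z0-9] key can name
one_gb_files_log10 = 8 * 2**30 * math.log10(2)   # log10 of 2^(8 * 2^30)

print(f"100-char keys: ~10^{math.log10(key_space):.0f} files")   # ~10^179
print(f"distinct 1 GB files: ~10^{one_gb_files_log10:.3e}")      # ~10^(2.6 billion)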
Buffer Overflow
Legendary
Offline
Activity: 1652
Merit: 1016
September 09, 2013, 07:13:16 PM
You're trying to fit a pint of milk into a half-pint glass. Can't be done.
ZephramC
September 09, 2013, 07:21:31 PM
I have simple questions for you.
- Can you compress ANY file (to 0.2% of its original size, or whatever number lower than 100%)?
- How fast is your compression, and how fast is decompression?
- How much additional data do you need for decompression? (For example, you compress a Blu-ray movie to 100,000 bytes and send me those 100,000 bytes, but I need some other data, at least your decompression program. So how much additional data, not counting the operating system like Linux or Windows, do I need?)
- How much can you compress a 1000-byte file (text, audio, video, random data)?
BurtW
Legendary
Offline
Activity: 2646
Merit: 1137
All paid signature campaigns should be banned.
September 09, 2013, 10:03:28 PM
Is it something like this? Maybe that as the base code plus "variable run length search and lookup!" (see the bottom of the description under future enhancements).

Now this idea does have some things in common with MPEG:

1) Encoding is a long, laborious, tedious process. And the more resources you spend encoding (in this case, time and money spent doing the variable run length searches), the smaller the compressed file. The encoding could be done on "very big iron" in order to compress high-value content like the latest Adam Sandler POS movie.

2) Theoretically the decode could be fast / real time, and most importantly it can be done in parallel. Knowing the next N metadata descriptors, you can immediately calculate all of them in parallel. And a hardware decoder is theoretically possible.

Now, to see if it is practical, we can take a search space S = some number of digits of pi, and calculate the average run length possible within this search space given random input data. Since each metadata descriptor would have a starting bit or byte location and a number of bits or bytes, the average metadata descriptor size must be smaller than the average random sequence found in the search space. One other thing to note is that the size of the search space S only affects encoder search time and cost, so it can be made very large. It does not affect the decoder cost, and it only affects the metadata size in that every time you double S you have to add one more bit to the location parameter in the metadata descriptor. Hmmmmm....
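The arithmetic can be sketched directly. For random input, the longest run you can expect to find anywhere in a random search space of S bits is only about log2(S) bits (the standard longest-run result), while the descriptor's location field alone already costs log2(S) bits. A rough sketch, assuming Pi's digits behave like fair coin flips:
Code:
import math

for S in (10**6, 10**9, 10**12):           # search space size, in bits
    pointer_bits = math.log2(S)            # cost of the location parameter
    expected_match = math.log2(S)          # longest run you expect to find
    print(f"S = {S:>15,} bits: pointer ~{pointer_bits:.0f} bits, "
          f"expected best match ~{expected_match:.0f} bits")
# Doubling S adds one bit to the pointer (as noted above) and buys roughly
# one more bit of expected match: break-even before the length field is even
# counted. That is why a bigger search space cannot rescue random input.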
robertde
Newbie
Offline
Activity: 26
Merit: 0
September 10, 2013, 02:21:44 AM
What an idea... The maths don't add up, though.
rigel
Legendary
Offline
Activity: 1240
Merit: 1001
Thank God I'm an atheist
September 10, 2013, 02:37:10 AM
Is it something like this? ROFL, you made my day. Thank you, sir!