Bitcoin Forum
Pages: « 1 2 3 4 5 6 [7] 8 9 10 11 12 13 14 15 16 17 »  All
Author Topic: Crypto Compression Concept Worth Big Money - I Did It!  (Read 13900 times)
zathras (Sr. Member)
September 10, 2013, 02:38:35 AM  #121

From a cursory reading I'd say the cost of the energy required to 'brute force' a solution (i.e. recreate the file) from just a cryptographic hash of said file is orders of magnitude higher than the raw transmission cost.

Smart Property & Distributed Exchange: Master Protocol for Bitcoin
B(asic)Miner (OP, Newbie)
September 10, 2013, 05:16:57 AM  #122

Is it something like this?

Is this for real, or is it a joke?  Did someone just make this site to mock me, or is this for real? 

The description used mirrors my idea exactly .... perhaps someone has already discovered what I am trying to get done.  If this is so, what a pity.  My dream goes to another. 
B(asic)Miner (OP, Newbie)
September 10, 2013, 05:46:54 AM  #123

Is it something like this?
Maybe that as the base code + "Variable run length search and lookup!" (see the bottom of the description under future enhancements)

Now this idea does have some things in common with mpeg:

1) Encoding is a long laborious tedious process.  And, the more resources you spend encoding (in this case time and money spent doing the variable run length searches) the smaller the compressed file.  The encoding could be done on "very big iron" in order to compress high value content like the latest Adam Sandler POS movie.

2) Theoretically the decode could be fast / real time and, most importantly, it can be done in parallel.  Knowing the next N metadata descriptors, you can immediately calculate all of them in parallel.  And a hardware decoder is theoretically possible.

Now to see if it is practical we can take a search space S = a number of digits of pi, and calculate the average run length possible within this search space given random input data.

Since each metadata descriptor would have a starting bit or byte location and a number of bits or bytes the average metadata descriptor size must be smaller than the average random sequence found in the search space.

One other thing to note is that the size of the search space S only affects encoder search time and cost so it can be made very large.  It does not affect the decoder cost and only affects the metadata size in that every time you double S you have to add one more bit to the location parameter in the metadata descriptor.

Hmmmmm....


My theory's encoding speed is the opposite of the industry's, then, in this case.  The encoding is real-time: you can record video straight into the formula, and all that gets saved to disc is a 4 kilobyte file.  It's the decoding that's super slow, at least I think it would be.  Without testing, I couldn't be certain.

During my research, I found that the average space needed to encode 8 bits (1 byte) is 100 to 150 index points into Pi per byte.  Thus, if you have a 1024 K file, you would need 12,240 index points to reveal the thumbprint of that data and obtain the Crypto Code.  If my math is right (and it might not be, so please check me, I'm not good at math) it would take 12.24 BILLION index points into Pi to store 1 GB of data.  I don't want to go that far into Pi, because then just trying to find the timeline in all that branching data could take lifetimes, so I want to break up the file into multiple Crypto Keys, spaced evenly.  So let's say 250 Megabytes or 500 Megabytes, or even 100 Megabytes if that is faster.  Since each 100 megabytes of data would be represented by a single line, adding more lines or more chunks wouldn't really cost anything:

100 Megabyte File Stuffed In 100 Megabyte Splits
[OPENFILE]
[filesize=100000000_&_Name="100MBofStuffForWhomever.zip"]
[MChunks=1_&_REM:  Begin Chunk(s) on Next Line! ]
[1,w, xxxxxx, y, zzzz]
[CLOSEFILE]

400 Megabyte File Stuffed In 100 Megabyte Splits
[OPENFILE]
[filesize=400000000_&_Name="400MBofStuffForWhomever.zip"]
[MChunks=4_&_REM:  Begin Chunk(s) on Next Line! ]
[1,w, xxxxxx, y, zzzz]
[2,w, xxxxxx, y, zzzz]
[3,w, xxxxxx, y, zzzz]
[4,w, xxxxxx, y, zzzz]
[CLOSEFILE]

each using the same 1 GB index key from Pi to be encoded through.  The data is then searched backwards from that final index point in Pi to the decimal point (or start of Pi) to find the one unique timeline that fits all the criteria.  Since this approach tracks how many 1s are in the data and what the initial conditions are at the start of Pi, it can use this information to search backwards from the ending point of each Mega Chunk of data to find the unique timeline.  Once a timeline is captured, you now have the data that was originally encoded, by measuring the set of changes you made while travelling forward in Pi.

There is thus only one unique route taken, and thus only one unique data set that can be found for the entire sequence.  Even if you arrive at the same number in Pi as an ending point, the individual changes made inside the route can also be tracked to ensure uniqueness.  Even if ten different files land on the same ending point in Pi, that does not mean the theory is broken, since it's the signature of the timeline itself and how it fits like a key into a very exact and precise lock.  When this is found, the program can draw out the data by analyzing and reconstituting the 0s and 1s precisely as they were added in during the path taken.

I realize this doesn't seem to make sense, because language cannot express what I have seen, or at least MY language cannot express what I have seen to be true and demonstrated on paper.  I'm sure many people have had ideas they were unable to adequately communicate but which they could see clearly themselves, like Leonardo da Vinci.
Buffer Overflow (Legendary)
September 10, 2013, 06:23:55 AM  #124

Is it something like this?

Is this for real, or is it a joke?  Did someone just make this site to mock me, or is this for real? 

The description used mirrors my idea exactly .... perhaps someone has already discovered what I am trying to get done.  If this is so, what a pity.  My dream goes to another. 

So this is where you copied the idea from.

Asking for investors, yet it was open source all along.

rigel (Legendary)
September 10, 2013, 06:59:32 AM  #125

Is it something like this?

Is this for real, or is it a joke?  Did someone just make this site to mock me, or is this for real?  

The description used mirrors my idea exactly .... perhaps someone has already discovered what I am trying to get done.  If this is so, what a pity.  My dream goes to another.  

Code is fully functional BUT it is a joke (published on 1 April).

Anyone who knows a little math can tell that the compressed files will be BIGGER than the originals.

Someone else told you that:

We don't need to understand exactly how your 'compression' algorithm is meant to work.
You cannot compress gigabyte files into a couple of dozen characters. Can't be done.
N digits of alphanumeric index can only index 62^N maximum possible unique files. Fact.

This is also VERY VERY SLOW

You are not a teacher
ZephramC (Sr. Member)
September 10, 2013, 07:01:52 AM  #126

When I was about 13-14 years old, I had a similar (yet simpler) idea of "compressing" files by running a loooong seeded random number sequence and waiting for the sequence-to-be-compressed to appear.
It took me some time to realize why it cannot work. (Well, it can work, but the indexes pointing into the sequence of random numbers [and the same applies to pi] would be bigger than the data I want to compress.)

The basic problem is lack of imagination. Every file appears in Pi, that is right. But the first position at which some specific 1024-byte file appears in Pi can be a number so high that you cannot write it in 2500 decimal places (which means you cannot write it even with 1024 base-256 numerals [= bytes]).
ZephramC (Sr. Member)
September 10, 2013, 07:10:18 AM  #127

You can try to "compress" 3-digit NUMBER sequences (000-999) by waiting for them to appear in Pi. For some of them, you will wait longer than 10^3 = 1000 decimal places before they appear.

This is a must, because the number of possible sequences to be compressed is always bigger than or equal to the number of possible indexes/decimal places/descriptors/metadata pointers.
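This 3-digit experiment is easy to reproduce. A minimal sketch in Python (digits generated with Machin's formula via the stdlib `decimal` module; the 10,000-digit window is an arbitrary choice for illustration):

```python
from decimal import Decimal, getcontext

def pi_digits(n):
    """First n decimal digits of pi after the point (Machin's formula)."""
    getcontext().prec = n + 15
    eps = Decimal(10) ** -(n + 10)
    def arctan_inv(m):
        # arctan(1/m) as an alternating Decimal power series
        x = Decimal(1) / m
        total = term = x
        x2, k = x * x, 1
        while True:
            term *= -x2
            delta = term / (2 * k + 1)
            total += delta
            if abs(delta) < eps:
                return total
            k += 1
    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return str(pi)[2:n + 2]

digits = pi_digits(10_000)

# First-occurrence index of each 3-digit sequence 000..999.
first = {}
for i in range(len(digits) - 2):
    first.setdefault(digits[i:i + 3], i)

# How many sequences have a "pointer" that is longer than the
# 3 digits it points to (index >= 1000)?
late = sum(1 for i in first.values() if i >= 1000)
print(f"{len(first)} of 1000 sequences found; {late} first appear at index >= 1000")
```

By pigeonhole alone, the 1000 distinct sequences cannot all first appear below index 1000 unless the digits formed a perfect de Bruijn-like arrangement, which pi's do not.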


I am sorry to disappoint you though :-/
btc4ever (Sr. Member)
September 10, 2013, 08:20:48 AM  #128

http://blog.dave.io/2013/03/fs-a-filesystem-capable-of-100-compression/

According to the above-linked description, pifs works by storing an index for every byte, so you usually end up with the indexes being larger than the compressed data. I get that. Also, looking for a full file (byte-for-byte) in pi is a computationally expensive proposition.

Question: what if instead we use indexes to fixed (or variable) length sequences?  But only when the sequence length is greater than the index size.  And pre-calc pi up to some limit, so we can just look up an index into pi given a sequence.  If a sequence is not found, just store the input bytes instead.  When compressing and decompressing, we could also check for rotated values of our input string, e.g. 1,2,3,5 could also match 2,3,4,6 or 3,4,5,7 in our lookup table, so long as we store a rotation offset.
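The pre-calc idea can be tried directly. A toy sketch (random digits stand in for a pre-calculated pi table; `K`, the corpus size, and all names are arbitrary illustrative choices, and the rotation trick is omitted):

```python
import random

random.seed(42)
# Stand-in for a pre-calculated table of pi's digits.
corpus = ''.join(random.choices('0123456789', k=200_000))

K = 5  # block length in digits
# Pre-calc: first index of every K-digit sequence occurring in the corpus.
first = {}
for i in range(len(corpus) - K + 1):
    first.setdefault(corpus[i:i + K], i)

def compress(data):
    """Emit ('I', index) only when the index is strictly shorter than
    the block it replaces; otherwise emit the literal block."""
    out = []
    for j in range(0, len(data), K):
        block = data[j:j + K]
        idx = first.get(block)
        if idx is not None and len(str(idx)) < len(block):
            out.append(('I', idx))
        else:
            out.append(('L', block))
    return out

def decompress(coded):
    return ''.join(corpus[v:v + K] if tag == 'I' else v for tag, v in coded)

data = ''.join(random.choices('0123456789', k=10_000))
coded = compress(data)
wins = sum(1 for tag, _ in coded if tag == 'I')
print(f"{wins}/{len(coded)} blocks found a short index")
```

With a 200,000-digit table, only sequences whose first occurrence sits in the first ~10,000 positions get an index of at most 4 digits, so at most ~10% of the 100,000 possible 5-digit blocks can ever win, and each win still has to pay for the index-or-literal flag.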

Waiting for all the mathematicians to explain why this is unworkable.    ;-)


Psst!!  Wanna make bitcoin unstoppable? Why the Only Real Way to Buy Bitcoins Is on the Streets. Avoid banks and centralized exchanges.   Buy/Sell coins locally.  Meet other bitcoiners and develop your network.   Try localbitcoins.com or find or start a buttonwood / satoshi square in your area.  Pass it on!
murraypaul (Sr. Member)
September 10, 2013, 08:47:20 AM  #129

Take 0000001  and    0001000   and  100000 for example.   The index for each is, respectively:

BYTE EXAMPLE:              0000001:       0001000:        100000:
    Pi Index:                      (57)               (85)             (103)

To try to drive this home, lets use your own example.
You have encoded three 7-bit numbers (actually one of them is only 6 bits, but I'll assume that was a typo).
A 7 bit number can represent the values 0-127.
A 6 bit number can represent the values 0-63. And so on.
When encoding your three 7 bit values, for two of them your resulting index is also a 7 bit value.
So your index needs just as much space to be stored as the original data did.
This is what we've been trying to tell you, and your own example has shown it to be true.

BTC: 16TgAGdiTSsTWSsBDphebNJCFr1NT78xFW
SRC: scefi1XMhq91n3oF5FrE3HqddVvvCZP9KB
murraypaul (Sr. Member)
September 10, 2013, 08:49:24 AM  #130


And just to quote from that:
Quote
A gentleman by the name of Philip Langdale has taken these properties, and built an inspired, creative, and completely useless filesystem around them. πFS eschews the traditional filesystem concept of taking data and coming up with a way to store it on disk. Instead, it stores each byte as an offset in π. Even for something as gloriously ludicrous as πFS, it would be computationally infeasible to seek through π for the occurrence of the whole file in sequence, so instead it stores offsets in π for each byte of the file.

Is this useful? Not even slightly. Enormous and unpredictable (in very literal terms) processing requirements aside, the maximum size of a byte is, well, a byte – 8 bits. This allows for offset values between 0-255 (256 possible values). Unfortunately, the first 256 digits of π don’t contain all possible bytes. This means that to store a byte, or in this case an offset in π for your byte, you will probably have to use more than one byte. It’s completely useless.

It is, however, really cool.
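The byte-as-offset scheme the quote describes can be sketched with the Bailey-Borwein-Plouffe formula for hex digits of pi (a float-based digit extractor, fine only for modest offsets; pifs's actual implementation differs, and the 400-digit window here is an arbitrary demo size):

```python
def pi_hex_digit(k):
    """k-th hex digit of pi after the point (k >= 1), via the
    Bailey-Borwein-Plouffe formula. Plain floats limit accuracy
    to modest k, which is enough for a demo."""
    def S(j):
        # fractional part of 16^(k-1) * sum_i 1 / (16^i * (8i + j))
        s = 0.0
        for i in range(k):
            s = (s + pow(16, k - 1 - i, 8 * i + j) / (8 * i + j)) % 1.0
        t, i = 0.0, k
        while True:
            term = 16.0 ** (k - 1 - i) / (8 * i + j)
            if term < 1e-17:
                return (s + t) % 1.0
            t += term
            i += 1
    frac = (4 * S(1) - 2 * S(4) - S(5) - S(6)) % 1.0
    return int(frac * 16)

# A small window of pi's hex expansion: 243F6A8885A308D3...
HEX = ''.join('0123456789ABCDEF'[pi_hex_digit(k)] for k in range(1, 401))

def byte_offsets(data):
    """pifs-style: one offset into pi's hex digits per input byte
    (-1 if the byte never occurs in our 400-digit window)."""
    return [HEX.find(f'{b:02X}') for b in data]

print(HEX[:16])
print(byte_offsets(b'Hi'))
```

As the quote notes, an offset can exceed 255, at which point storing it already costs more than the one byte it encodes.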

We aren't just being mean to you, it really doesn't work.

Professor James Moriarty, aka TheTortoise (Sr. Member)
September 10, 2013, 09:05:03 AM  #131


I do believe this can be done. Maybe not by the OP, maybe not anytime soon, but one day this idea will be done. 50 years ago we had no internet; we created our own made-up currency, Bitcoin, on this internet because we were sick of our countries' currencies, and Bitcoin is over $100 each now. If anything, Bitcoin taught me that everything is possible. Maybe not now, but one day.
rigel (Legendary)
September 10, 2013, 09:15:41 AM  #132

Question: What if instead we use indexes to fixed (or variable) length sequences instead?  But only when the sequence length is greater than the index size.   

You don't know if the index will be smaller or larger BEFORE calculating it.
In most cases (more than 50%) it will be larger.
In some cases it will be a little smaller.
You also have to store one piece of additional information for every block: am I saving an index or the sequence?

The compressed file will be bigger than 50% of the original file (perhaps bigger than the original file, because of the additional information).

And pre-calc pi up to some limit so we can just lookup index into pi given a sequence.

So the exe becomes huge and needs a lot of RAM

When compressing and decompressing, we could also check for rotated values of our input string.  eg:  1,2,3,5 could also match 2,3,4,6 or 3,4,5,7 in our lookup table -- so long as we store a rotation offset.

You reduce the exe size but have even more information to store in the compressed file (making it bigger) and need more computational power (the pre-calc becomes quite useless).
rigel (Legendary)
September 10, 2013, 09:30:46 AM  #133


I do believe this can be done. Maybe not by the OP, maybe not anytime soon, but one day this idea will be done. 50 years ago we had no internet; we created our own made-up currency, Bitcoin, on this internet because we were sick of our countries' currencies, and Bitcoin is over $100 each now. If anything, Bitcoin taught me that everything is possible. Maybe not now, but one day.

You are probably wrong.

The Internet was created by men.
Bitcoin's price is decided by men.

Math was not created by men, and nowadays we have no way to change it.

I think 1+1 will always be 2 in the future (though I can't be absolutely sure about this).
B(asic)Miner (OP, Newbie)
September 10, 2013, 09:44:38 AM  #134

Take 0000001  and    0001000   and  100000 for example.   The index for each is, respectively:

BYTE EXAMPLE:              0000001:       0001000:        100000:
    Pi Index:                      (57)               (85)             (103)
.... your index needs just as much space to be stored as the original data did.
This is what we've been trying to tell you, and your own example has shown it to be true.

But this was only an example of how the encoder works, not its compression value.  I told you before, this is for files larger than 500k.  You don't seem to understand, no matter how many times I say the same thing in different ways, that this one code represents whatever size file you are working with.  If the file is 1024K or 1024 MB, it's still the same one crypto key.  Whether it's 1 MB or 100MB or 1000MB, it's just one 4K file.  You aren't listening.  Your only job appears to be to find ways to break my theory using small data sets of 3 bytes, when I clearly said many times this won't be used for data that small.  

Let's say I encode Apple's iCloud Installer, which is just under 50 MB in size (according to Google).  For every 1 byte of data, I need to index 100 - 150 indexes into Pi.  I am not converting anything or recording anything to do this movement.  I am simply moving forward according to some rules I figured out for creating a unique pathway through Pi.  So I would need (100*50,000) or 500,000 indexes into Pi at least.  Let's say that the last bit of data I find is at index location 501,500 into Pi.  Well, here is my crypto key:

[0.501500.8.5250]  I didn't have to record any other data than that.  Now that's placed into the 4K file that is used to tell the software how to reconstitute the data.  This is ALL the data there is in my finished file, plain and simple.  There is no hashing chunks of data as you describe, nothing like that.  I've created a path through Pi like taking footsteps on certain numbers.  The footsteps taken are irrelevant; what matters are the changes between those steps.  For 0s, we hop from the first 4 in Pi to every other 4 in Pi only.  But if we encounter a 1 in our binary data, then the Hunt Value increments +1, so now we are only hopping on every 5 in Pi.  This is what keeps the record of our original bits.  All the other numbers in Pi are useless as we are encoding.  Here is an example: 001.  We would be looking for the first 4 in Pi, then the 2nd 4 in Pi, and now we must increment +1 and hop to the next 5 in Pi.  We keep hopping on 5s as long as our data is 0s, but if we encounter another 1, we increment and begin hopping along 6s.  In this way, our pathway is totally unique to the data we are feeding into it, and thus our arrival point (end point) can let us figure out the path taken to reach it, by knowing how many 1s were used and then attempting to solve backwards to the decimal point using the precise number of steps it took to encode it, the original file size recorded in bits.
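The hopping rule described here can be written down literally. A sketch under stated assumptions (random digits stand in for Pi; the post doesn't say what happens after the hunt digit passes 9, so wrapping to 0 is a guess; the key is taken to be final index + count of 1s + bit length, as described):

```python
import random

random.seed(7)
DIGITS = ''.join(random.choices('0123456789', k=2_000_000))  # stand-in for pi's digits

def encode(bits):
    """Hop through DIGITS as described: for each bit, first bump the
    hunt digit on a 1 (wrapping after 9 -- an assumption), then jump
    to the next occurrence of the hunt digit. Returns the ending index."""
    hunt, pos = 4, -1
    for b in bits:
        if b == '1':
            hunt = (hunt + 1) % 10
        pos = DIGITS.index(str(hunt), pos + 1)
    return pos

# Does (ending index, number of 1s, bit length) pin down a unique input?
seen, collisions = {}, 0
for n in range(2 ** 16):
    bits = format(n, '016b')
    key = (encode(bits), bits.count('1'), 16)
    if key in seen:
        collisions += 1
    else:
        seen[key] = bits
print(f"{collisions} of {2 ** 16} 16-bit inputs share their key with an earlier input")
```

Counting how many of the 65,536 possible 16-bit inputs end up sharing a key is a direct way to test whether the ending point really determines one unique timeline.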

Also, I'm not sure I should be saying 8 bits per byte; my friend taught me 7 bits per byte when working with ASCII binary. Was I misinformed on that?  Remember, the theory works by looking at the data in a file as characters in a book, in ASCII format, and thus will need to encode precisely the same number of bits for every character.  But if you look at a hex/decimal/binary converter online and type in just one letter, you get 3 bits or 4 bits or 2 bits ... some erratic bit size.  Every character must have the same bit size, so I want to translate the data into ASCII binary.
B(asic)Miner (OP, Newbie)
September 10, 2013, 09:55:43 AM  #135


I do believe this can be done. Maybe not by the OP, maybe not anytime soon, but one day this idea will be done. 50 years ago we had no internet; we created our own made-up currency, Bitcoin, on this internet because we were sick of our countries' currencies, and Bitcoin is over $100 each now. If anything, Bitcoin taught me that everything is possible. Maybe not now, but one day.

Listen, if I sat down with a programmer who was able to listen to what I'm saying and not throw up objections that have no relevance to my method, and we could sit and work on this until it fits my theory exactly, then I know it would work.  You are right, I cannot do it myself, but I have been very clear about that from the very first post.  I have asked for help.  But it could be done soon.  Very, very soon.  A few days for the encoding portion; it's just a few lines really.  The decoding portion would take a lot of brain work, research, testing, modification, etc., and we'd have to make our choices about how to resolve the solution of this software by seeing the results of those tests.  If I had the money to hire a coder myself, I would not have come here at all.  Again, I'm not asking for money now, but I am asking for someone to just TRY to do this with me.  How much time could it take to prove me wrong?  Maybe less than the time it's taking you to type out these responses belittling me so you can go on believing everything is impossible.

Stop trying to push this off to ... someday, when now would be just as good.  Another person besides myself, a programmer named Philip Langdale, has already created something like my idea here:  https://github.com/philipl/pifs

It is a method (like I've been telling all of you) to push data into Pi using a very hard mathematical solution.  It barely works because the encoding is just too slow.  But he has already done this!  You can download the app yourself and give it a try.  He is encrypting data into Pi.  But he's doing it the wrong way, in my opinion.  If you read his page, you will see that he has rational support behind the idea that every known file in the universe can fit inside of Pi.  Go read his arguments yourself, so you can stop beating me up for things you don't understand, despite being geniuses no doubt.  I want to work with you, I don't want to fight with you.  Thanks for your time.
ZephramC (Sr. Member)
September 10, 2013, 10:03:33 AM  #136

You don't seem to understand, no matter how many times I say the same thing in different ways, that this one code represents whatever size file you are working with.  If the file is 1024K or 1024 MB, it's still the same one crypto key.  Whether it's 1 MB or 100MB or 1000MB, it's just one 4K file.  

There are a little more than 1.415461031044954789 × 10^9864 possible 4K files. These are ALL the possible results of your compression. Although this number is extremely high, it is much lower than the number of possible 1MB or 100MB or 1000MB files.
So how can you make a correspondence (a 1-to-1 connection) between a set of files and a much larger set of files without any code repeating (i.e. corresponding to several different decompressed files)?
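The count above checks out and is easy to verify with exact integer arithmetic (variable names are just for illustration):

```python
import math

# Number of distinct 4 KiB files = 256^4096, computed exactly.
four_k_keys = 256 ** 4096
print("distinct 4K keys:  %s... ~10^%d"
      % (str(four_k_keys)[:10], len(str(four_k_keys)) - 1))

# Magnitude of the number of distinct 1 MiB files, via logarithms
# (the exact integer would have about 2.5 million digits).
digits_1mb = 8 * 1024 * 1024 * math.log10(2)
print("distinct 1MB files: ~10^%d" % int(digits_1mb))
```

The key space falls short of the 1 MB file space by about 2.5 million orders of magnitude, so by pigeonhole many different inputs must map to the same key.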
Kazimir (Legendary)
September 10, 2013, 10:09:12 AM  #137

Stop trying to push this off to ... someday, when now would be just as good.  Another person besides myself, his name is Philip Langdale, a programmer, has already created something like my idea here:  https://github.com/philipl/pifs
You do realize that this whole "Pi FileSystem" was a practical joke, right?

Yes, it works, in theory, for 0.000000001% (prolly much less) of all possible input files. But it's way, way, WAAAAY too slow for practical usage AND it doesn't work for the remaining 99.999999999% (prolly much more) of input files.

And "too slow" in this context is not just a matter of requiring faster processors, it's "too slow" as in "takes longer than the age of the universe to process".

In theory, there's no difference between theory and practice. In practice, there is.
Insert coin(s): 1KazimirL9MNcnFnoosGrEkmMsbYLxPPob
murraypaul (Sr. Member)
September 10, 2013, 10:10:59 AM  #138

But this was only an example of how the encoder works, not its compression value.  I told you before, this is for files larger than 500k.  You don't seem to understand, no matter how many times I say the same thing in different ways, that this one code represents whatever size file you are working with.  If the file is 1024K or 1024 MB, it's still the same one crypto key.  Whether it's 1 MB or 100MB or 1000MB, it's just one 4K file.  You aren't listening.  Your only job appears to be to find ways to break my theory using small data sets of 3 bytes, when I clearly said many times this won't be used for data that small.  

You don't seem to understand when I try to explain very clearly why your process will not work.
You've even seen a joke page created by someone else with broadly the same process, and another page explaining why it will not work.

Quote
Let's say I encode Apple's iCloud Installer, which is just under 50 MB in size (according to Google).  For every 1 byte of data, I need to index 100 - 150 indexes into Pi.  I am not converting anything or recording anything to do this movement.  I am simply moving forward according to some rules I figured out for creating a unique pathway through Pi.  So I would need (100*50,000) or 500,000 indexes into Pi at least.  Let's say that the last bit of data I find is at index location 501,500 into Pi.  Well, here is my crypto key:

[0.501500.8.5250]  I didn't have to record any other data than that.  Now that's placed into the 4K file that is used to tell the software how to reconstitute the data.

a) 50,000 bytes is 50KB, not 50MB. 50MB is 50*1024*1024 bytes, not 50*1000.

b) And what if you don't find the last bit of data until index location 501,500,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000...(repeat up until 50MB)?

Your entire process only works if you assume you can find all possible index locations in less space than the original file took.
You cannot do this.

Quote
This is ALL the data there is in my finished file, plain and simple.  There is no hashing chunks of data as you describe, nothing like that.  I've created a path through Pi like taking footsteps on certain numbers.  The footsteps taken are irrelevant; what matters are the changes between those steps.  For 0s, we hop from the first 4 in Pi to every other 4 in Pi only.  But if we encounter a 1 in our binary data, then the Hunt Value increments +1, so now we are only hopping on every 5 in Pi.  This is what keeps the record of our original bits.  All the other numbers in Pi are useless as we are encoding.  Here is an example: 001.  We would be looking for the first 4 in Pi, then the 2nd 4 in Pi, and now we must increment +1 and hop to the next 5 in Pi.  We keep hopping on 5s as long as our data is 0s, but if we encounter another 1, we increment and begin hopping along 6s.  In this way, our pathway is totally unique to the data we are feeding into it, and thus our arrival point (end point) can let us figure out the path taken to reach it, by knowing how many 1s were used and then attempting to solve backwards to the decimal point using the precise number of steps it took to encode it, the original file size recorded in bits.

And what people keep telling you, but you refuse to accept, is that on average, the index required to store the final position will be as large as the file you are trying to store.
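This "as large as the file" point can be checked numerically with a quick sketch (random digits stand in for pi, which is believed to behave statistically the same way; sample counts are arbitrary). The first index of an n-digit pattern averages around 10^n, so writing the index down takes about n digits: no saving.

```python
import random

random.seed(0)
STREAM = ''.join(random.choices('0123456789', k=1_000_000))  # stand-in for pi's digits

avg_index = {}
for n in (2, 3, 4, 5):
    idxs = []
    for _ in range(200):
        pattern = ''.join(random.choices('0123456789', k=n))
        i = STREAM.find(pattern)
        if i >= 0:              # a rare 5-digit pattern may be missing entirely
            idxs.append(i)
    avg_index[n] = sum(idxs) / len(idxs)
    print(f"{n}-digit patterns: average first index ~{avg_index[n]:,.0f}")
```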

Kazimir (Legendary)
September 10, 2013, 10:13:57 AM  #139

You don't seem to understand, no matter how many times I say the same thing in different ways, that this one code represents whatever size file you are working with.  If the file is 1024K or 1024 MB, it's still the same one crypto key.  Whether it's 1 MB or 100MB or 1000MB, it's just one 4K file.  

There are a little more than 1.415461031044954789 × 10^9864 possible 4K files. These are ALL the possible results of your compression. Although this number is extremely high, it is much lower than the number of possible 1MB or 100MB or 1000MB files.
So how can you make a correspondence (a 1-to-1 connection) between a set of files and a much larger set of files without any code repeating (i.e. corresponding to several different decompressed files)?
This. You're compressing multiple different input files to the same 4K crypto key. When decompressing a 4K crypto key, how do you determine the result, since there are multiple (or actually: infinitely many!) possible outcomes?

You seem to be missing this critical point, B(asic)Miner. There are infinitely more files larger than 4K, than there are 4K crypto keys.

ZephramC (Sr. Member)
September 10, 2013, 10:14:48 AM  #140

But if you read his page, you will see that he has rational support behind that idea that every known file in the universe can fit inside of Pi.

Yes. Every known file in the universe is somewhere inside of Pi. (As a matter of fact, even every finite-length unknown file is located inside Pi. That means every wikipedia article that will ever be published, a description of every invention that will be invented on Earth or elsewhere, every novel, short story or book that was, is or will be written. Including all variant endings, all misspellings, etc.)

But no matter how you traverse Pi, you always need a starting point: a starting index, a first decimal place to begin with. (Or maybe you need the last index, the last step of your path, for recreating the path from finish to start.) Obviously the simplest way is to just proceed sequentially, starting e.g. with index 12345678 and proceeding to 12345679, 12345680, 12345681, ... But even if you traverse or backtrack Pi in some different, algorithmically predefined way, you still need that starting point.

The number of starting points cannot be smaller than the number of possible compressible files.