Author Topic: Crypto Compression Concept Worth Big Money - I Did It!  (Read 13882 times)
balanghai
Sr. Member | Activity: 364 | Merit: 253
September 11, 2013, 01:11:18 PM  #181

Now that I have a programmer on my team, I just wonder: what is the workflow of the compression (reduction), or encoding, or whatever term is appropriate?

Can you give us a simple workflow like in this format:

Begin -> Analyze -> Encode -> etc. -> etc. -> End


Best Regards,
Balanghai
Mota
Legendary | Activity: 804 | Merit: 1002
September 11, 2013, 01:17:02 PM  #182



Quote from: B(asic)Miner
I would LOVE to take your bet, I really would, but I am not a programmer, nor do I have elite math.

That is exactly your problem. You are right, you could index everything in pi or any other endlessly continuing, non-periodic number. BUT, like everyone here has told you, it is simply not possible to do that in an efficient manner.
I will give you a little example:
Say you have a 10 MB file: you read out the actual 0s and 1s in it, search pi (or any other such number) for the matching string, and store the starting and ending index in your file. So far so good: now you have a file which is only the length of the starting and ending index. That is really small, you are right. BUT you first have to find the corresponding string in pi, which takes A LOT of time.
AND everyone who wants to recreate the file has to compute pi UNTIL he reaches the corresponding string again. And that is the problem here.

Let's say you have around 500 MB of data! Rounding a little, that is a corresponding string of 4*10^9 bits. Now, to make that a little more manageable, we convert it into the decimal system, since a string of 0s and 1s that long would be unwieldy to search for in a decimal number.
Each decimal digit carries about 3.3 bits, so that 4*10^9-bit string is still roughly a 1.2*10^9-digit decimal string. Now you need to find this string in an infinite number. You could use the BBP formula to spread your search over multiple processors, and you would still search a VERY long time for a corresponding position.

I just saw your last explanation, but that changes pretty much nothing about my statement above. This is not a new idea, and it is shown to be impractical in first-year university information-theory courses, iirc.
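A minimal sketch of the search-depth problem Mota is describing, using just the 51 digits of pi that appear in OnkelPaul's chart below (the PI constant and the loop are mine, for illustration): if pi's digits behave uniformly, an n-digit pattern is only expected to turn up around position 10^n, so the index you would have to store grows about as large as the data it is supposed to replace.

Code:
# How deep into pi must we look before a given digit string first appears?
# Under the uniform-digits assumption, an n-digit pattern is expected to
# appear only around position 10^n.
PI = "314159265358979323846264338327950288419716939937510"  # first 51 digits

for pattern in ("7", "77", "777", "7777"):
    pos = PI.find(pattern)
    if pos >= 0:
        print(f"{pattern!r}: first found at index {pos}")
    else:
        print(f"{pattern!r}: not in the first {len(PI)} digits; expect ~10^{len(pattern)}")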
B(asic)Miner (OP)
Newbie | Activity: 28 | Merit: 0
September 11, 2013, 01:22:05 PM  #183

Quote from: OnkelPaul
The biggest question is not whether a computer can do the "compression" but whether it is possible to reconstruct the file.

Here are the chains of digit positions for just 4 bits:
Pi:    314159265358979323846264338327950288419716939937510
0000     4                4   4            4
0001     4                4   4       5
0010     4                4           5                5
0011     4                4           5         6
0100     4 5   5 5
0101     4 5   5           6
0110     4 5  6            6
0111     4 5  6     7
1000       5   5 5                    5
1001       5   5 5         6
1010       5   5           6 6
1011       5   5           6        7
1100       5  6            6 6
1101       5  6            6        7
1110       5  6     7               7
1111       5  6     7    8

As you can see, the bit combinations 0101, 0110 and 1001 all lead to the same ending digit position. The same is true for 0001/1000, 1010/1100 and 1011/1101/1110. This means that the algorithm's output of a file starting with one of the bit patterns in such a group is indistinguishable from the output if the file had started with one of the other bit patterns in the group.

It follows that it is impossible to uniquely decompress the output.

Don't take it personally, but your scheme is not usable.

Onkel Paul

This is totally awesome work, Onkel Paul, I truly appreciate this kind of thinking (I hadn't thought of a chart like this to help visualize this), and that you also understand the rules thus far.  

The only critique I will make of this is that, again, the theory is not meant for small data sizes, and using an example like this might not work to describe the theory's effectiveness, because as the size of the file diminishes, uniqueness diminishes with it. And a 1-byte example is also smaller than the 4k output size of the file in question, meaning it's inversely efficient to my design proposal.

Here is why:

Imagine if I take 10 hops. In just 10 hops (which is essentially the beginning of Pi) we are bound to hop over the same data with various 4-bit examples, because there hasn't been enough room yet to establish uniqueness. If that's the case, perhaps we need to capture the first 64 bytes from the original data to overcome this problem. The output could look like this:

[OPENFILE]
[FileSplitSize=1GB]
[filesize=8000000_&_Name="8MBofStuffForWhomever.zip"]
[BaseKey = 01101100110110011100000110000101101011111010..... to 64k]
[MChunks=1_&_REM:  Begin Chunk(s) on Next Line! ]
[1,w, xxxxxx, y, zzzz]
[CLOSEFILE]

The Basekey size might have to be at least 64 bytes, I think, in order to get around the counting problem observed in your arguments thus far.... So that would mean the first 64 bytes would get added to the file, increasing it from 4 to 64k. That's sad, but still doable. The longer the data stream, the more unique the outcome.

Now back to WHY:

A byte is 8 bits long. But traveling into Pi initially, we are overlapping because there hasn't been enough time to uncover some 1s in the data, which are far more efficient than 0s. Look at 1111 in your table quoted above: it stops way shorter than the other 4-bit sequences you've shown there. We must incorporate enough initial distance into the program to account for there being no possible overlap. As we travel further into Pi (10 bytes in, 20 bytes in), a larger distribution of 1s in the software we are encoding would have begun to compress that timeline, making it more efficient, while the same-sized but non-identical file comprised of more 0s would have gone further into Pi, meaning all of its branches would be on roads not possible to be taken by the other version of the file. These small-size examples can be counterintuitive to the concept. But now, thanks to you, I see that as we start from the decimal point there are too many possible overlaps, and thus we need to capture enough of the first part to account for those overlaps, however much that turns out to be. Because at some point, the program will have taken enough twists and turns to have been brought onto a different trail entirely by the number of hops taken.

In other words, we need someone brainiac-like to calculate the following:

Given 8 bits to a byte, given 0s being less efficient (using my theory) in Pi than 1s, how far would we have to go in Pi before the number of hops taken would create a unique remaining road?

Here is an example for you:  

 1 byte       Index Ending in Pi At:
00100010      103    (the least efficient 4-bit sequence, doubled, thru Pi)
11111111       67    (the most efficient 4-bit sequence, doubled, thru Pi)

At some point along our path thru Pi, while we may still hop over the same digits, the number of hops taken to get there will have differed, and that means we can now break our timeline with two different files of the same size. Breaking the timeline means that route through Pi cannot produce our file again, meaning it's more likely that there will be only one unique route back to our BaseKey.

You guys are becoming quite awesome now, even the hecklers are starting to turn around. Maybe one day I'll even convince PaulMurray there is something here worth looking at.  Until then, I must keep my nose down and incorporate your teachings and ideas.  THANK YOU GUYS!
b!z
Legendary | Activity: 1582 | Merit: 1010
September 11, 2013, 01:33:06 PM  #184

Teleportation Concept Worth Big Money - I Did It!

I am a normal guy etc etc and I thought of a way for teleportation. I can't prove it works since I can't code, but the idea seems to work in theory!
This brilliant idea can change the world! It can be used for space travel, transportation and so much more.

1. Scan human
2. Convert atoms into code
3. Compress using B(asic)Miner's method, since there is too much human code
4. Send 1 line of code to the receiving end
5. Decompress and rebuild human

PLEASE GIVE ME A CHANCE, I JUST NEED SOMEONE TO BELIEVE IN ME
OnkelPaul
Legendary | Activity: 1039 | Merit: 1005
September 11, 2013, 01:41:06 PM  #185

But the problem does not go away when more bits are "encoded" - it only gets worse!
After just 4 bits, the encoding machinery is in the same state for some bit patterns, and it has therefore lost all information about which of those patterns it had encoded. The same is true for any 4-bit (or longer) subsequence in the input file. If you have two files that are identical up to such a sequence, the encoder will be in the same state for both of them before encoding that sequence, and there will be groups of possible bit values that all leave it in the same ending state after the sequence.
Going further into Pi before starting the process does not help at all: the distribution of decimal digits is uniform.
Adding the first 64 (or 64k, what's a factor of 1024 between friends?) bytes to the encoded file does not help either - it just shifts the point where the non-uniqueness problem appears 64 or 64k bytes into the file.

Please realize that wishful thinking does not heal a scheme that's fundamentally broken.

Onkel Paul

(I know I'm too old to be trolled, but at least it'll increase my activity, which is at least something...)
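OnkelPaul's chart and his state-collision argument can be checked mechanically. Here is a minimal sketch (the function name and structure are mine) of the hop rules as B(asic)Miner describes them: the first bit picks the starting Hunt Value (0 -> 4, 1 -> 5), each later 1 bit adds one to it, and every bit hops to the next occurrence of the current hunt digit in pi. Grouping all sixteen 4-bit inputs by their final encoder state reproduces exactly the collision groups in the chart; a shared BaseKey prefix would leave the encoder in one common state, so the same collisions would simply occur later.

Code:
from itertools import product

PI = "314159265358979323846264338327950288419716939937510"

def encode(bits):
    """Final (1-based position, hunt digit) state after encoding `bits`."""
    pos, hunt = 0, None
    for i, b in enumerate(bits):
        if i == 0:
            hunt = 4 if b == "0" else 5     # first bit picks the hunt digit
        elif b == "1":
            hunt += 1                       # 1 = +1 to the Hunt Value
        pos = PI.index(str(hunt), pos + 1)  # hop to its next occurrence
    return pos + 1, hunt

# Group all 16 four-bit patterns by their final encoder state.
states = {}
for bits in ("".join(p) for p in product("01", repeat=4)):
    states.setdefault(encode(bits), []).append(bits)

for state, patterns in sorted(states.items()):
    if len(patterns) > 1:
        print(f"end state {state}: {patterns}")

Running it prints, among others, end state (21, 6): ['0101', '0110', '1001'], the very group OnkelPaul identified, which is why the encoding cannot be reversed uniquely.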

Mota
Legendary | Activity: 804 | Merit: 1002
September 11, 2013, 01:49:21 PM  #186


Quote from: OnkelPaul
Please realize that wishful thinking does not heal a scheme that's fundamentally broken.

Onkel Paul

(I know I'm too old to be trolled, but at least it'll increase my activity, which is at least something...)

You made my day. Thank you.
B(asic)Miner (OP)
Newbie | Activity: 28 | Merit: 0
September 11, 2013, 01:49:59 PM (last edit: 02:05:47 PM)  #187



Quote from: Mota
That is exactly your problem. You are right, you could index everything in pi or any other endlessly continuing, non-periodic number. BUT, like everyone here has told you, it is simply not possible to do that in an efficient manner. [...]

I encourage newcomers to the party; I'm all in. But you would need to go back and re-read a large portion of this, starting from about page 6 forward until now, and then you will see that what you're talking about is compression. Thanks to BurtW, what I now know this to be called is creating an Encoder/Decoder that allows you to pull data out of Pi with a Meta Data keyfile of between 4k and 64k.


BALANGHAI

Essentially, this is the entire process:

1) Open the file to be processed. Analyze the data. Then we open our output file and begin to record the following:
   A) Original filename. Size of the original file. Size of the Pi Index used (how big each chunk is to be split into). Size of the last Mega Chunk in bits (if applicable).
   B) Basekey (the first 64 bytes of the original file, giving the program enough room to establish a unique path given a number of hops).

2) Begin reading the data one character at a time (converting hex to Ascii Binary), all in memory, using the loaded Pi Index at the Pi Index Size shown in A above. Convert everything (all file contents) to Ascii Binary, so every incoming piece of data is exactly 8 bits. Begin moving forward through Pi by hopping on a single solitary digit called the "Hunt Value", meaning the digit we are currently hopping on in Pi. Starting from the decimal point, begin encoding by hopping: hop to the 1st 4 in Pi if our first bit is a 0; if it's a 1, hop to the 1st 5 in Pi. Hop along Pi, encoding 0s and 1s using these rules: 0 = no change in Hunt Value (if you start with a 4, you keep searching for the next 4); 1 = +1 to the Hunt Value. I'm sure this can be done in realtime with data; you are just moving along, not having to do any hard math at all. Computers were made for this kind of math, it's the most basic, so our encoding would be lightning fast.

3) When we reach our size limit for our first chunk and there is more data to be read in, we open our file and write our first MetaData Key:
   C) [1.x.yyyy.z]   (For those who have been following: we no longer need the first bit, since we have added something called the BaseKey, a 64-byte record of the initial string of the file.)
Keep encoding until all the data is complete. Record the size of the last chunk into the file record in bits, and pad with 1s from that point out so the last chunk is exactly equal in size to the other chunks. During decompression, the program will compare the data split size to the last chunk's size and remove the padding 1s automatically when it comes time to write out the last file.
Close the file out and dump the Pi Index from memory, freeing up the computer.


DECODING:
Start from the Ending Index Location. Begin hunting through Pi using all combinations of 1s and 0s in some systematic or mathematical way (looking for efficiency and intelligence in finding the quickest possible way to do it), to find the one path through Pi that ends at our Ending Point using exactly the same number of steps, calculating intelligently the number of 1s encoded along the way using the Flip Count information (so we don't waste time searching for paths that have more 1s than are even in the timeline we've created). Since any fork in the road will shrink and compress a given file, the overall chance of another file having the same 64-byte Base Key, the same number of steps to reach the end goal, and the same number of 1s would be significantly reduced. Add to that: since we are not encoding the file all the way into Pi under one single MetaData Key, but splitting the file up into more and more MetaData Keys, the chances of the file's uniqueness keep going up with more divisions, while the MetaData Keys added to our final output make the file larger and larger.

It is my goal to make this work on files from 500k up to 5 TB using this system, if it can be made to work.
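To make the DECODING step concrete, here is a minimal brute-force sketch of what "hunting through Pi using all combinations of 1s and 0s" amounts to (my own illustration, reusing the encode() rules from the sketch earlier in the thread): enumerate every candidate bit string and keep the ones whose path ends in the stored final state. The candidate set doubles with every added bit, and even at 4 bits it already returns more than one answer.

Code:
from itertools import product

PI = "314159265358979323846264338327950288419716939937510"

def encode(bits):
    pos, hunt = 0, None
    for i, b in enumerate(bits):
        if i == 0:
            hunt = 4 if b == "0" else 5
        elif b == "1":
            hunt += 1
        pos = PI.index(str(hunt), pos + 1)
    return pos + 1, hunt

def decode_candidates(n_bits, end_state):
    """All n-bit inputs whose encoding ends in end_state (2^n brute force)."""
    return ["".join(p) for p in product("01", repeat=n_bits)
            if encode("".join(p)) == end_state]

print(decode_candidates(4, (21, 6)))   # ['0101', '0110', '1001'], not unique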
B(asic)Miner (OP)
Newbie | Activity: 28 | Merit: 0
September 11, 2013, 01:57:39 PM  #188

Quote from: b!z
Teleportation Concept Worth Big Money - I Did It!
[...]
PLEASE GIVE ME A CHANCE, I JUST NEED SOMEONE TO BELIEVE IN ME

Okay, guys, you got me. Good one. Although I did say in my first post (jokingly) that this could work for that one day, when the technology reached a high enough level. And how do you think we are ever going to get there, if we don't at least dream and attempt to do it? Maybe my idea seems ludicrous to you, but I'm sincere, I'm trying to do something cool, and it's the best I can do. I'm sorry I'm not Bill Gates. Oh, sorry, I forgot, he was only a businessman who ripped off everyone else and gave himself credit for it. But I admit he's a lot better than me. In fact, I have to sit here and use the computer his company made and the OS his company made to write all of this, so that pretty much settles it, right?

I don't deserve to even try, is that it? I should live in China teaching children my whole life, making $1400 a month, and never dream of anything, just stare into space my whole life. At least a few of you seem to care about what I'm trying to do. I appreciate that. It's hard having one's best efforts shot down. I never said I was smarter than or as smart as anyone here. I just want to try. TRY, damn it all. It's so fun, and it might make a difference one day. Which means my life might account for something. It's not my fault I was poor all my life, and taught to hate school until I was almost 35 years old; by then it was too late.

Life is harsh.
b!z
Legendary | Activity: 1582 | Merit: 1010
September 11, 2013, 02:04:23 PM  #189

Quote from: B(asic)Miner
Okay, guys, you got me. Good one. [...] It's not my fault I was poor all my life, and taught to hate school until I was almost 35 years old; by then it was too late.

Life is harsh.

You are right, life is harsh. Nobody will believe people like you and me. They think we are fools, or clowns.
We must follow the examples of great pioneers like the Wright brothers or Thomas Edison. We will change the future, and we will be remembered.

Fall seven times, stand up eight.  ~Japanese Proverb
Mota
Legendary | Activity: 804 | Merit: 1002
September 11, 2013, 02:14:14 PM  #190

Quote from: B(asic)Miner
DECODING: Start from the Ending Index Location. [...] It is my goal to make this work on files from 500k up to 5 TB using this system, if it can be made to work.

Like I said before, this changes nothing. There are several factors that condemn this to fail. Even if you could solve the uniqueness problem, which you can't, there are still several others. Your "chunks" are good if you want to shorten the jump time; you need a chunk every time you reach a certain value, say 33 for the heck of it, so pretty much every 33rd bit you need to write the corresponding chunk to your file. The higher the value, the longer you have to search through pi.
So, now here is the problem: you want to continuously iterate through pi, which is fine by itself - BUT you have to store all that in memory. You could use certain formulas which let you jump to a specific place in pi to cut the time and memory needed, but we are talking about a supercomputer here to effectively iterate through pi, get the corresponding values, write them to a file, remember where you were (or use a formula to get there again), and do that over and over again.

Don't misunderstand me - you had a really nice idea. But you are not the first to try it. The main factor of an Encoder/Decoder is speed. You don't have any speed.

To get you to understand this is pretty hard; you don't seem to read the numbers correctly.
I told you that 500 MB is a string of 4*10^9 bits. 5 TB?!?
That is a string of 4*10^13 bits. You need to take at least forty TRILLION steps through pi, and that is IF all the hunting values sit right next to each other!
This won't change even if you always start from the top of pi!
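Mota's figures are easy to verify, and one more estimate can be added: if pi's digits are uniform, a given hunt digit occurs about once every 10 digits, so each encoded bit walks roughly 10 digits further into pi (the 10-digits-per-bit factor is my own back-of-envelope assumption, not a claim from the thread):

Code:
# Rough size check for the hop scheme. Assumption: uniform digits, so each
# encoded bit advances ~10 digits through pi on average.
bits_500mb = 500 * 10**6 * 8       # 4.0e9 bits
bits_5tb   = 5 * 10**12 * 8        # 4.0e13 bits

for label, bits in (("500 MB", bits_500mb), ("5 TB", bits_5tb)):
    print(f"{label}: {bits:.1e} bits to encode, ~{10 * bits:.1e} digits of pi walked")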
B(asic)Miner (OP)
Newbie | Activity: 28 | Merit: 0
September 11, 2013, 02:19:05 PM  #191

Quote from: b!z
You are right, life is harsh. Nobody will believe people like you and me. They think we are fools, or clowns.
We must follow the examples of great pioneers like the Wright brothers or Thomas Edison. We will change the future, and we will be remembered.

Fall seven times, stand up eight.  ~Japanese Proverb

Yeah, exactly. Do you know Gustave Whitehead? No? Nobody does, but some people think he invented portions of the airplane before the Wright brothers, who were credited with having invented the airplane but actually only improved upon and put together a number of inventions made by others. Nobody remembers Whitehead, but he and others like him paved the way. The problem is, they gave up... and the Wright brothers didn't. And the rest... is history.
Kazimir
Legendary | Activity: 1176 | Merit: 1011
September 11, 2013, 02:25:29 PM  #192

Quote from: B(asic)Miner
All I did was invent the process that needs to be taken and implemented.
I hate to destroy your dreams, but your idea is fundamentally flawed. As has been pointed out repeatedly in this topic.

In your first post you wrote:
my solution for compressing 99.8% of all data out of a file, leaving only a basic crypto key containing the thread of how to re-create the entire file from scratch.
This is simply not possible. Not a matter of requiring faster computers, more memory, or better technology, but a matter of theoretically, mathematically provable, fundamentally, logically impossible.

You keep beating around the bush and getting lost in irrelevant details, thus distracting others (and possibly yourself too) from the very clear, simple fact that you are missing a crucial point.

Quote from: B(asic)Miner
C) The index isn't included in the file we save; that would be stupid, since it would do just as you say and include a lot of data we don't need. The index is Pi, a known number anyone who programs can program in moments to auto-generate, in less than 51 bytes I'm told. So our co/deco would include it built in, to generate our index in RAM. The part you're not getting (and again, I don't blame you, I WANT you to understand, because that would be awesome!) is that we don't include any index in our output file; THAT'S WHY IT CAN BE 4K. The fact that we used an index in Pi that was 16 million indexes long has nothing to do with our final file size, because we are just writing down the Index point itself.
Misuse of terminology causes some confusion here, what you call 'index point' is what others call 'index'.

Using your terminology, let me rephrase the essential flaw in your approach: for pretty much ANY file (aside from 0.00000000001% very lucky cases), the index point (or 'crypto key' or however you call it) required to re-create the original file, will require MORE data than the original file itself.

The fact that you can re-create Pi (or other irrational numbers or infinite data sets that contain any possible file in existence) in just a few bytes, does not change this fact in any way.
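Kazimir's 0.00000000001% is, if anything, generous; the counting behind his claim fits in a few lines (a sketch of mine, using the 4K key size from this thread and the 5K file size from his next post):

Code:
# There are 2**32768 possible 4K keys but 2**40960 possible 5K files, so at
# most one 5K file in 2**8192 can be assigned its own distinct 4K key.
key_bits  = 4 * 1024 * 8           # 32768 bits in a 4K key
file_bits = 5 * 1024 * 8           # 40960 bits in a 5K file

print(f"files per available key: 2^{file_bits - key_bits}")   # 2^8192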

Kazimir
Legendary | Activity: 1176 | Merit: 1011
September 11, 2013, 02:31:04 PM  #193

Quote from: B(asic)Miner
Yeah, exactly. Do you know Gustave Whitehead? [...] The problem is, they gave up... and the Wright brothers didn't. And the rest... is history.
The stuff that Whitehead invented was not theoretically, fundamentally, logically impossible. It just couldn't be done in practice at his time, because human technology was not advanced enough yet.

Your idea on the other hand, can be mathematically proven to be wrong. No matter what kind of smart thinking, advanced technology, quantum computers, zero-point energy, or other hypothetical fancy stuff we'll have at hand in the future. Math doesn't lie.

B(asic)Miner (OP)
Newbie | Activity: 28 | Merit: 0
September 11, 2013, 02:35:00 PM  #194

Quote from: Mota
Don't misunderstand me - you had a really nice idea. But you are not the first to try it. The main factor of an Encoder/Decoder is speed. You don't have any speed.

Well, if you mean the time needed to load 2GB of the Pi Index into memory, I assume it would take a minute or so, upon starting the software, to generate a Pi Index of that size. Who cares if it's in memory? I don't. Besides, who knows if I will end up using 2GB; it could end up being 500MB of Pi, with smaller chunks. First we need to just get a basic 2-meg version running, try to encode files between 500K and 1 MB, and just see what happens.

But if I get this working, imagine the time saved over the internet to send your friends or family the 20GB of videos taken on your vacation. You would be sending a small file, 4-64k large, to their email in moments. Then they'd decompress out the videos overnight, while sleeping perhaps. Wake up, the videos are there. The internet did not need to be congested with all of those 0s and 1s. And if a lot of people were using it, the internet would work more and more smoothly all the time. Think of the difference!

Another thing is, you still can't convince me that just because it's possible to have 2^N files TO encode, there are that many unique files. For all we know, our research into this could reveal a fundamental characteristic of Nature that only allows organized patterns to assemble in random data at a given ratio, like the Golden Mean (8/7, is it?). Just trying to solve this problem could itself lead to a breakthrough. What if there are only (2^N)/10 files in existence of each size, and they already happen to be written in Nature? It would mean all of our ideas already exist in the Global Intelligence of Nature and that our brains are receiving the information via DNA alterations that come from eating plants. Because science just recently confirmed that by eating some plants our DNA is altered, and they hypothesize that new ideas come from this phenomenon. If Nature is the one providing original ideas, it stands to reason the answer is already in Nature for every created thing.

I don't wish to go too deeply into philosophy here, though. Just saying, you don't know for sure that even though there are 2^N files possible, all of those combinations will ever be used to organize intelligent data into a file that ends up on someone's computer and could be put through my system. In that case, there might be all the uniqueness I'll ever need. We won't know until we try it and watch it either get busted or work as conceived.
Kazimir
Legendary | Activity: 1176 | Merit: 1011
September 11, 2013, 02:43:48 PM  #195

B(asic)Miner, please answer this:

You claim that for any input file, you can create a 4K 'crypto key' from which you can re-create the original file.

Suppose we take two different files that are each 5K large. Let's call them A and B. Through some smart process (whose technical details are irrelevant for now) we calculate their corresponding crypto keys which are 4K each, i.e. two pieces of 4K data. Let's call these P and Q. Now, if it's possible (by means of some other smart process that may or may not involve generating Pi or whatever) to reconstruct A from P, and B from Q, do you agree that if A and B are different, then P and Q must be different as well?

(If not, i.e. if P = Q, then reconstructing the original file from P (or Q, which is the same) will either result in A, thus we can never re-create B, or it results in B, thus we can never re-create A)
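Scaled down to toy sizes, the A/B versus P/Q argument can be watched directly. A minimal sketch, with a made-up toy_encode standing in for any deterministic scheme whatsoever (here: keep the first two bits); 3-bit 'files' and 2-bit 'keys' force at least two files onto the same key no matter how the encoder is chosen:

Code:
from itertools import product

files = ["".join(p) for p in product("01", repeat=3)]   # 8 possible "files"

def toy_encode(f):
    return f[:2]    # any deterministic 3-bit -> 2-bit map hits the same wall

seen = {}
for f in files:
    k = toy_encode(f)
    if k in seen:
        print(f"files {seen[k]} and {f} share key {k!r}; only one can be recovered")
    seen[k] = f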


B(asic)Miner (OP)
Newbie | Activity: 28 | Merit: 0
September 11, 2013, 02:51:19 PM  #196

Quote from: Kazimir
I hate to destroy your dreams, but your idea is fundamentally flawed. As has been pointed out repeatedly in this topic.

Using your terminology, let me rephrase the essential flaw in your approach: for pretty much ANY file (aside from 0.00000000001% very lucky cases), the index point (or 'crypto key' or however you call it) required to re-create the original file, will require MORE data than the original file itself.

Okay, then I ask you to explain to me how that works.  Because it sounds crazy to me, it sounds backwards, it sounds wrong.  So I must not understand what you understand.  So teach me, if you will, so I can try to understand where you're coming from with this.  

For example, let's say I want to send you a quote from Alice in Wonderland, but both you and I have the book in our library.  So instead of me trying to send you the PDF thru the internet, I think, hey, why don't I just send a reference?  (What you're calling the Index)

So I write:  "Alice in Wonderland, page 15, lines 1 through 3."

Then I think why send all of that when I can just send this:   "AlceInWndrLnd... p15, Lns 1-3"   so now my Index (cryptokey, whatever) is just that (and it's only 28 characters long).  

You open the book and find the 1st paragraph. It's 3 lines long. It has 52 words, or approximately 280 characters. That means my reference (index, cryptokey, whatever) is only 10% of the size of the data you are now reading. I've just sent 90% less data over the internet than I would have had I copied the text directly. Sure, it took you a minute longer to go find the book and bring it off the shelf, open to the page, etc... but my fundamental concept is just the same as this.

Now explain how my 28-character reference is somehow supposed to also include the book + the 28 characters over the internet, when I clearly never sent the book! It's for this reason I don't think you fully understand how my theory works yet. Not a flame, though, trust me. I want both to understand you, and you to understand me, before all hope is ruled out...
Kazimir
Legendary | Activity: 1176 | Merit: 1011
September 11, 2013, 02:55:13 PM  #197

Quote from: B(asic)Miner
Now explain how my 28-character reference is somehow supposed to also include the book
*sigh*... that's not what I'm saying.

Yes, for these particular 3 lines, this "Alice in Wonderland based encoding (or compression) scheme" would work.

However, for 99.99999999999999999999999999999% of all lines in existence, it wouldn't. Or it would require more index points or more references throughout the book, ending up taking more space than the actual line itself.
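The imbalance Kazimir describes can be counted. A rough sketch, under assumptions invented purely for illustration (book size, reference format, and passage length are mine, not numbers from the thread):

Code:
# How many passages can a "page P, lines A-B" reference address, versus how
# many 280-character passages exist at all? Assumed: 300 pages, 40 lines per
# page, references spanning at most 5 lines, a 95-symbol printable alphabet.
pages, lines_per_page, max_span = 300, 40, 5
addressable = pages * lines_per_page * max_span   # 60,000 snippets in the book

possible = 95 ** 280                              # all possible 280-char texts
print(f"{addressable} addressable snippets vs ~10^{len(str(possible)) - 1} possible passages")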

Mota
Legendary | Activity: 804 | Merit: 1002
September 11, 2013, 02:58:53 PM  #198

Quote from: B(asic)Miner
Another thing is, you still can't convince me that just because it's possible to have 2^N files TO encode, there are that many unique files. [...] We won't know until we try it and watch it either get busted or work as conceived.

I give up. You are just too dumb to understand what people are trying to tell you. What you wrote up there is utter bullshit, in every way. There are pretty much uncountably many unique files out there. Which still has nothing to do with it - at all! You don't even seem to understand that each chunk you want to save has a specific value itself, and you need a lot of chunks. Your idea of a stored pi string is nice - you don't have to compute it if you have it already stored somewhere - but you still have to iterate through it. A LOT!


Since pi is an INFINITE number, all of the data in the world is stored somewhere in there. Along with the cure for every illness, every word ever said, every formula ever created. So yeah, there is pretty much everything written in nature. That is nothing new.

My whole post was never about the number of files but about the fucking length of a single 5TB file and the amount of storage you need for the chunks, the memory, and the freaking number of operations you would need to process a single file that way.

But hey! With a quantum computer your idea would be brilliant! But then again, you could just index the complete file in pi by then, since it would be there instantly.
Kazimir
Legendary | Activity: 1176 | Merit: 1011
September 11, 2013, 03:07:19 PM  #199

Quote from: Mota
Since pi is an INFINITE number, all of the data in the world is stored somewhere in there.
This isn't even necessarily true.

The number 0.121221222122221222221etc (that is, its decimals consist of increasingly long runs of 2s separated by single 1s) is also "infinite", and irrational, yet it doesn't contain "3" anywhere.
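Kazimir's counterexample is easy to generate and check (construction mine):

Code:
# Build the start of 0.121221222122221... : runs of 2s of increasing length,
# separated by single 1s. Non-terminating and non-repeating, yet no 3 ever
# appears; being "infinite" does not mean containing everything.
digits = "".join("1" + "2" * n for n in range(1, 200))

print(digits[:21])      # 121221222122221222221
print("3" in digits)    # False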




Mota
Legendary | Activity: 804 | Merit: 1002
September 11, 2013, 03:09:04 PM  #200

Quote from: Kazimir
This isn't even necessarily true.

The number 0.121221222122221222221etc (that is, its decimals consist of increasingly long runs of 2s separated by single 1s) is also "infinite", and irrational, yet it doesn't contain "3" anywhere.




<.< I did not state that this is the case with every irrational number now, did I?