AlexGR
Legendary
Offline
Activity: 1708
Merit: 1049
|
|
January 16, 2016, 02:58:02 PM |
|
I'm not very tech-savvy, but I think I can understand the problem: how can you make a summary of a book made up of random words? The problem is not easy, but I was thinking about this: what if we substitute the most frequently repeating series of numbers with a symbol? E.g. you could replace 00000 (if it occurs many times in the block) with @, 037753198 (if it occurs more than N times) with ₩, and so on.
Could it work?
It definitely would. However, the symbol would still end up being represented by binary data. So if you have, say, one byte of storage, that's 255 symbols plus a no-symbol, each represented by 8 binary digits (0 or 1, from 00000000 to 11111111). If we had a hardware chip with a table of, say, 100,000 "symbols" that didn't operate in binary, it would work, but it would have to "cooperate" with the rest of the system, which would then need "translators". I've read something similar here: http://www.endlesscompression.com/
"In the Dutch book "De broncode"(*) Jan Sloot talks about another way of thinking, something that works at hardware level, another way of coding he also named "seven"; translated from Dutch it can also mean "to sieve" or "to filter". He didn't use zeros and ones any more, because that was two-dimensional; he explains that there are three dimensions."
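For illustration only (not something proposed in the thread), here is a minimal Python sketch of the substitution idea. It shows that the replacement only pays off because the input is repetitive, and that the symbol plus its table entry still cost bytes; a real scheme would also need to escape the symbol byte wherever it occurs naturally in the data.

```python
# Hypothetical example: replace a frequent pattern with a 1-byte symbol.
# The symbol and the table entry (symbol -> pattern) still cost storage.
data = b"00000" * 100 + b"12345"          # highly repetitive input, 505 bytes
pattern, symbol = b"00000", b"\x01"       # assumes \x01 never occurs in the data
compressed = data.replace(pattern, symbol)
table_cost = len(symbol) + len(pattern)   # the table must be stored or pre-shared
total = len(compressed) + table_cost
print(len(data), total)                   # 505 111
```

The saving comes entirely from the repetition; on data where `00000` never occurs, `total` would be larger than the input because of the table overhead.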
|
|
|
|
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
|
|
January 16, 2016, 02:59:54 PM |
|
Hmm... I think this silly topic has now started to go too far even for entertainment purposes.
|
|
|
|
AlexGR
Legendary
Offline
Activity: 1708
Merit: 1049
|
|
January 16, 2016, 03:05:55 PM |
|
If you ask me "is 99.9% compression feasible" in every data set, I have 100% confidence that it is. I just don't know the method.
Then unfortunately the only thing to say about that is that you probably shouldn't repeat it (and oops - I just quoted you, so now you can't erase it - damn that stupid "the internet never forgets" thing). There is a very extensive list of things that we "shouldn't" have been able to achieve, and experts were sure of it, yet we did. As I said, if you find something that can shave off even 1% or 0.1% per iteration, it's just a matter of how many times you fold the data from that point onward. Saying you'll get 99.9% sounds much different from saying I can get 0.1%, doesn't it? Yet if you can get 0.1% consistently, on every data set (obviously that includes compressed ones), then over several hundred or thousand iterations you'll end up with a fraction of the original size.
|
|
|
|
shorena
Copper Member
Legendary
Offline
Activity: 1498
Merit: 1528
No I dont escrow anymore.
|
|
January 16, 2016, 03:07:28 PM |
|
Hmm... I think this silly topic has now started to go too far even for entertainment purposes.
Aren't you excited? 3D bits could be a revolution. Endless compression, all the data on a single bit - imagine the things we could do. Footnote: There might be some hidden information in this message. It's sarcasm.
If you ask me "is 99.9% compression feasible" in every data set, I have 100% confidence that it is. I just don't know the method.
Then unfortunately the only thing to say about that is that you probably shouldn't repeat it (and oops - I just quoted you, so now you can't erase it - damn that stupid "the internet never forgets" thing). There is a very extensive list of things that we "shouldn't" have been able to achieve, and experts were sure of it, yet we did. As I said, if you find something that can shave off even 1% or 0.1% per iteration, it's just a matter of how many times you fold the data from that point onward. Saying you'll get 99.9% sounds much different from saying I can get 0.1%, doesn't it? Yet if you can get 0.1% consistently, on every data set (obviously that includes compressed ones), then over several hundred or thousand iterations you'll end up with a fraction of the original size.
You don't get it, do you? If it were possible, I could also store your entire being in a single bit. How can a single bit represent you? Are you a 0 or a 1?
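The single-bit reductio above is an instance of the counting (pigeonhole) argument. A small Python sketch, not from the thread, of why no lossless scheme can shrink every input:

```python
# There are 2**n distinct bit-strings of length n, but only 2**n - 1
# strings strictly shorter than n bits (lengths 0 through n-1:
# 1 + 2 + 4 + ... + 2**(n-1) = 2**n - 1). So any scheme mapping every
# n-bit input to a shorter output must collide on at least two inputs,
# and a collision means information is lost.
n = 8
inputs = 2 ** n                          # 256 distinct 8-bit inputs
shorter = sum(2 ** k for k in range(n))  # 255 strings shorter than 8 bits
print(inputs, shorter)                   # 256 255
```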
|
Im not really here, its just your imagination.
|
|
|
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
|
|
January 16, 2016, 03:08:12 PM |
|
Saying you'll get 99.9% sounds much different than saying I can get 0.1%, doesn't it? Yet if you can get 0.1% consistently, in every data set (obviously that includes compressed ones), then over several hundreds / thousand iterations you'll end up with a fraction of the original size.
You cannot get 0.1% consistently - if you are given a random set of values with no repeated ones, then how on earth would you compress it by any percentage at all? You are either deluded or wanting to scam, but in either case your statements are not even close to rational.
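CIYAM's point can be checked empirically. A sketch using Python's zlib purely as a stand-in for any general-purpose compressor: on random bytes (no redundancy) the output is already larger than the input, and feeding the compressor its own output only adds overhead rather than shaving off a fixed percentage.

```python
import os
import zlib

# Random data has no redundancy to remove; each re-compression pass
# adds container overhead instead of saving a consistent percentage.
size = 10_000
data = os.urandom(size)
for i in range(3):
    data = zlib.compress(data)
    print(i, len(data))   # every pass stays above the original 10,000 bytes
```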
|
|
|
|
AlexGR
Legendary
Offline
Activity: 1708
Merit: 1049
|
|
January 16, 2016, 03:15:21 PM |
|
Saying you'll get 99.9% sounds much different than saying I can get 0.1%, doesn't it? Yet if you can get 0.1% consistently, in every data set (obviously that includes compressed ones), then over several hundreds / thousand iterations you'll end up with a fraction of the original size.
You cannot get 0.1% consistently - if you are given a random set of values with no repeated ones, then how on earth would you compress it by any percentage at all? You are either deluded or wanting to scam, but in either case your statements are not even close to rational.
You don't need to be 100% static in your approach, like I proposed upthread. You could alternate compression techniques in each iteration: for example, classic compression in odd iterations and a pre-shared table in even iterations. That way each step would be easier to compress, as the pre-shared table would be like starting compression from scratch, and the classic one would start with a fresh data set. Look, I'm not developing 99% compression algorithms, but I'm fairly confident it can be done. And it will be done (if it hasn't been already).
|
|
|
|
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
|
|
January 16, 2016, 03:19:26 PM |
|
You don't need to be 100% static in your approach like I proposed upthread. You could alternate compression technique in each compression iteration.
You are spouting nonsense - why? My guess is that you are trying to scam people, because there is no logical reason to spout such nonsense otherwise. I think I should be warning others to take anything this forum member posts with more than a grain of salt.
|
|
|
|
AlexGR
Legendary
Offline
Activity: 1708
Merit: 1049
|
|
January 16, 2016, 03:20:14 PM |
|
You don't get it, do you? If it were possible, I could also store your entire being in a single bit. How can a single bit represent you? Are you a 0 or a 1?
It wouldn't work that way. Say you have compression savings of 1%: when you compress 100 bits you go to 99 (-1%). Then what? The next step would need 98.01 bits of storage, but storage comes in whole bits, so there is your limit: 98.01 rounds back up to 99, and you can't go further.
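The arithmetic in this post can be sketched as follows (assumption, as in the post: storage comes in whole bits, so a fractional requirement rounds up). A fixed 1% cut stalls as soon as 1% of the current size is less than one bit:

```python
import math

# Repeatedly apply a 1% saving, rounding up to whole bits each time.
bits = 100
while math.ceil(bits * 0.99) < bits:
    bits = math.ceil(bits * 0.99)
print(bits)   # 99: since 99 * 0.99 = 98.01 still needs 99 whole bits
```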
|
|
|
|
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
|
|
January 16, 2016, 03:23:17 PM |
|
BTW - people were trying this "ultimate compression" scam back in the 1990s (on Usenet), so your idea is really not much newer than the Nigerian Prince one. I suggest you try harder next time.
|
|
|
|
shorena
Copper Member
Legendary
Offline
Activity: 1498
Merit: 1528
No I dont escrow anymore.
|
|
January 16, 2016, 03:33:37 PM |
|
You don't get it, do you? If it were possible, I could also store your entire being in a single bit. How can a single bit represent you? Are you a 0 or a 1?
It wouldn't work that way. Say you have compression savings of 1%. When you compress down to 100 bits you then go to 99 (-1%). Then what? You would need 98.01 bits of storage, so there is your limit: 98.01 = 99 = you can't go further.
You failed to get the point; I will assume you are represented by a 0.
|
Im not really here, its just your imagination.
|
|
|
AlexGR
Legendary
Offline
Activity: 1708
Merit: 1049
|
|
January 16, 2016, 03:35:08 PM |
|
You don't get it, do you? If it were possible, I could also store your entire being in a single bit. How can a single bit represent you? Are you a 0 or a 1?
It wouldn't work that way. Say you have compression savings of 1%. When you compress down to 100 bits you then go to 99 (-1%). Then what? You would need 98.01 bits of storage, so there is your limit: 98.01 = 99 = you can't go further.
You failed to get the point; I will assume you are represented by a 0.
I thought you were trying to prove the impossibility of multiple-iteration compression by creating a single-bit paradox.
|
|
|
|
shorena
Copper Member
Legendary
Offline
Activity: 1498
Merit: 1528
No I dont escrow anymore.
|
|
January 16, 2016, 05:07:06 PM |
|
You dont get it do you? If it would be possible. I could also store your entire being in a single bit. How can a single bit represent you? Are you 0 or 1?
It wouldn't work that way. Say you have compression savings of 1%. When you compress down to 100 bits you then go to 99 (-1%). Then what? You would need 98.01 bits of storage, so there is your limit: 98.01 = 99 = you can't go further.
You failed to get the point; I will assume you are represented by a 0.
I thought you were trying to prove the impossibility of multiple-iteration compression by creating a single-bit paradox.
The single bit is not the important part. The important part is that you can't represent 2^n + 1 different things with n bits. Well, you can, but you will have at least one collision and thus lose information. You can change the way you store the information however you like. All compression does is remove redundancy, and usually our encoding schemes are pretty redundant. E.g. English text has a high proportion of e's, so a good encoding would use a short code[1] for e and a longer one for a symbol that is used less often, like q. Encrypted data, however, is different, because every symbol is equally likely[2], so there are no general shortcuts for encrypted data. All you gain is that you need more data to store your additional encoding information, be that in the data chunk itself or in a pre-distributed table.
[1] Short and long in terms of the number of bits used.
[2] If this were not a property of encrypted data, it would be vulnerable to frequency analysis.
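shorena's two points (skewed symbol frequencies are exploitable redundancy; uniform, ciphertext-like data is not) can be illustrated with a small Python sketch, using zlib as a stand-in entropy coder and `os.urandom` as a stand-in for encrypted data:

```python
import collections
import os
import zlib

# English-like text: 'e' dominates the frequency table, and a coder
# exploits that redundancy. Uniform random bytes leave nothing to exploit.
text = b"the letter e appears everywhere in everyday english text " * 40
counts = collections.Counter(text)
print(counts.most_common(2))              # byte 101 ('e') comes out on top
print(len(zlib.compress(text)), len(text))    # far smaller than the input
noise = os.urandom(len(text))
print(len(zlib.compress(noise)), len(noise))  # slightly larger than the input
```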
|
Im not really here, its just your imagination.
|
|
|
sAt0sHiFanClub
|
|
January 16, 2016, 05:14:50 PM |
|
And video compression is typically lossy. Lossy would never work for the blockchain.
Lossy would better describe MtGox and Cryptsy, amirite?
|
We must make money worse as a commodity if we wish to make it better as a medium of exchange
|
|
|
|