SamusNi (OP)
Newbie
Offline
Activity: 32
Merit: 0
January 16, 2016, 01:47:23 PM
Instead of all these discussions about increasing the block size, why don't we just compress the blocks and leave the size as it is?
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
January 16, 2016, 01:49:58 PM
> instead of all of this discussions to increase the block size, why don't we just compress the blocks, leaving the size as it is?

Blocks consist of transactions that are, for the most part, effectively random numbers (hashes, public keys, and signatures), so they simply won't compress much at all: you can't meaningfully compress random data. The efforts going on behind the scenes will make a much bigger difference than the tiny percentage you could shave off by compressing the contents of a block.
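This claim is easy to check with a short Python sketch. Here zlib stands in for gzip, and seeded pseudo-random bytes stand in for the hashes, keys, and signatures that fill a real block (the sizes and seed are arbitrary, chosen for illustration):

```python
import random
import zlib

# Stand-in for block contents: high-entropy pseudo-random bytes.
data = random.Random(42).randbytes(1_000_000)
compressed = zlib.compress(data, level=9)
print(len(compressed) / len(data))  # ratio stays near 1.0: no real savings

# Highly redundant data of the same size, for contrast: compresses to a
# tiny fraction, because a compressor exploits repetition, not size.
redundant = b"ab" * 500_000
print(len(zlib.compress(redundant, level=9)) / len(redundant))
```

The pseudo-random "block" comes out essentially unchanged, while the repetitive buffer shrinks to well under 1% of its size.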
SamusNi (OP)
Newbie
Offline
Activity: 32
Merit: 0
January 16, 2016, 01:59:15 PM
> instead of all of this discussions to increase the block size, why don't we just compress the blocks, leaving the size as it is?
> Blocks consist of transactions that for the most part are effectively random numbers (such as hashes, public keys and signatures) so they simply won't compress much at all (as you can't in any sensibly usable way compress random information). The efforts that are going on behind the scenes will make a much bigger difference than any tiny percent you could compress the content of a block.

Are you sure about that? Wouldn't something like gzip applied to the blocks reduce their size by like 99%? Which efforts are going on behind the scenes exactly?
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
January 16, 2016, 02:00:56 PM
> Are you sure about that? Wouldn't something like gzip applied to the blocks reduce their size by like 99%?

Yes - gzip won't do much at all - why don't you try it and see for yourself?

> Which efforts are going on behind the scenes exactly?

Things like SegWit (you could search to find out if you're interested, although I suspect you're just trying to age your account in order to qualify for an ad sig).
smith coins
January 16, 2016, 02:04:29 PM
> instead of all of this discussions to increase the block size, why don't we just compress the blocks, leaving the size as it is?
> Blocks consist of transactions that for the most part are effectively random numbers (such as hashes, public keys and signatures) so they simply won't compress much at all (as you can't in any sensibly usable way compress random information). The efforts that are going on behind the scenes will make a much bigger difference than any tiny percent you could compress the content of a block.
> Are you sure about that? Wouldn't something like gzip applied to the blocks reduce their size by like 99%? Which efforts are going on behind the scenes exactly?

I don't know how the network would work after compression, but 7zip is more efficient than gzip.
SamusNi (OP)
Newbie
Offline
Activity: 32
Merit: 0
January 16, 2016, 02:11:41 PM
> Are you sure about that? Wouldn't something like gzip applied to the blocks reduce their size by like 99%?
> Yes - gzip won't do much at all - why don't you try it and see for yourself?
> Things like SegWit (you could search to find out if you're interested although I suspect you're just trying to age your account in order to qualify for an ad sig).

Yes, I did some checks: only about a 25% gain with compressors like gzip and 7zip, so it doesn't help much. Thanks for the info. Looking forward to SegWit. Any plans for a release date?
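A gain of roughly that size is plausible: real transactions mix incompressible fields (hashes, signatures) with small repetitive fields (version numbers, script opcodes). A toy model shows the same modest gain; the 32-byte random field and the fixed trailer bytes below are invented for illustration, not a real transaction layout:

```python
import bz2
import lzma
import random
import zlib

rng = random.Random(1)
# Toy "block": each record is a 32-byte pseudo-random field (hash/signature
# stand-in) followed by a few fixed bytes (version/opcode stand-in).
block = b"".join(rng.randbytes(32) + b"\x01\x00\x76\xa9\x14\x88\xac"
                 for _ in range(10_000))
for name, packed in (("gzip/zlib", zlib.compress(block, 9)),
                     ("bzip2", bz2.compress(block, 9)),
                     ("7zip/lzma", lzma.compress(block))):
    print(name, round(len(packed) / len(block), 3))
```

All three compressors recover roughly the share taken by the repetitive bytes and almost nothing from the random fields, so the overall gain stays modest.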
Lauda
Legendary
Offline
Activity: 2674
Merit: 2965
Terminated.
January 16, 2016, 02:13:05 PM
This is a nice example of why people with an IT degree need to decide on the technicalities (not trying to be offensive). Compressing random data usually results in 0% saved space, or the compressed file actually ends up bigger than the original.
"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks" 😼 Bitcoin Core
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
January 16, 2016, 02:15:21 PM
> Compressing random data usually results in 0% saved space or the compressed file ends up actually being bigger than the original.

It's actually a very simple (and probably standard) way to check whether an encryption algorithm is badly flawed: if the encrypted data can be shrunk by any standard algorithm like gzip, then it obviously hasn't been encrypted properly.
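A rough sketch of that sanity check. The "weak cipher" below is a deliberately broken XOR with a short repeating key, and the random bytes stand in for the output of a sound cipher; the key, message, and 0.9 threshold are all invented for illustration:

```python
import random
import zlib

def compressible(data: bytes, threshold: float = 0.9) -> bool:
    """Flag data that a generic compressor can shrink noticeably."""
    return len(zlib.compress(data, 9)) < threshold * len(data)

plaintext = b"attack at dawn " * 10_000
# Weak "cipher": XOR with a short repeating key preserves the plaintext's
# repetition, so the output still compresses well - a red flag.
key = b"\x5a\xc3\x7e"
weak = bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))
# Stand-in for properly encrypted output: seeded pseudo-random bytes.
strong = random.Random(7).randbytes(len(plaintext))

print(compressible(weak))    # the weak cipher's output shrinks: flawed
print(compressible(strong))  # the random stand-in does not: looks sound
```

Passing this check does not prove a cipher is secure, but failing it, as CIYAM notes, proves something is badly wrong.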
AliceWonderMiscreations
January 16, 2016, 02:17:19 PM
> instead of all of this discussions to increase the block size, why don't we just compress the blocks, leaving the size as it is?
> Blocks consist of transactions that for the most part are effectively random numbers (such as hashes, public keys and signatures) so they simply won't compress much at all (as you can't in any sensibly usable way compress random information). The efforts that are going on behind the scenes will make a much bigger difference than any tiny percent you could compress the content of a block.
> Are you sure about that? Wouldn't something like gzip applied to the blocks reduce their size by like 99%? Which efforts are going on behind the scenes exactly?
> I don't know how the network will work after compressing but 7zip is more efficient than gzip

7zip is rarely found on *nix systems. Available, sure, but rarely found. bzip2 and xz are more common, better alternatives. It's moot anyway, though, since compression isn't the issue.
I hereby reserve the right to sometimes be wrong
AlexGR
Legendary
Offline
Activity: 1708
Merit: 1049
January 16, 2016, 02:19:59 PM
> This is a nice example of why people with IT degree need to decide on the technicalities (not trying to be offensive). Compressing random data usually results in 0% saved space or the compressed file ends up actually being bigger than the original.

We just need a revolutionary new compression method. Something like that: http://www.theserverside.com/feature/Has-a-New-York-startup-achieved-a-99-compression-rate

From presentations I've seen, this is not only for video. It was thought that it would be marketed best for video because video takes up most internet bandwidth nowadays. Perhaps people with neural networks will start competing on finding increasingly more efficient compression, compared to what we have now.
AliceWonderMiscreations
January 16, 2016, 02:27:56 PM
> This is a nice example of why people with IT degree need to decide on the technicalities (not trying to be offensive). Compressing random data usually results in 0% saved space or the compressed file ends up actually being bigger than the original.
> We just need a revolutionary new compression method. Something like that: http://www.theserverside.com/feature/Has-a-New-York-startup-achieved-a-99-compression-rate From presentations I've seen, this is not only for video. It was thought that it would be marketed best for video because video takes up most internet bandwidth nowadays. Perhaps people with neural networks will start competing on finding increasingly more efficient compression, compared to what we have now.

Video has a lot of predictable redundant data. Random data is, well, random. And video compression is typically lossy. Lossy would never work for the blockchain.
AlexGR
Legendary
Offline
Activity: 1708
Merit: 1049
January 16, 2016, 02:33:05 PM
> This is a nice example of why people with IT degree need to decide on the technicalities (not trying to be offensive). Compressing random data usually results in 0% saved space or the compressed file ends up actually being bigger than the original.
> We just need a revolutionary new compression method. Something like that: http://www.theserverside.com/feature/Has-a-New-York-startup-achieved-a-99-compression-rate From presentations I've seen, this is not only for video.
> Video has a lot of predictable redundant data. Random data is, well, random. And video compression is typically lossy. Lossy would never work for the blockchain.

This is not about video per se. It's about an algorithm that a neural network discovered, which could supposedly compress a lot of data at a very high ratio. Video is just one of the deployment markets because it takes up >50% of internet bandwidth, so, naturally, they went after it. But, from what I saw in one of the presentations, it's more or less data-agnostic. Plus, in that example, the original file is an MP4, which has already had redundant data removed (with loss of quality).
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
January 16, 2016, 02:38:06 PM
> This is not about video per se. It's about an algorithm that a neural network discovered, which could compress a lot of data with very high percentage ratio.

The fact that they publish nothing about this supposed algorithm suggests that it is in fact a hoax rather than some revolutionary new thing. It's strange that people will just accept "we can't publish stuff because of X" when in fact they could publish the specific algorithm used (for the supposed video mentioned) without giving away how that algorithm was created (as supposedly the algorithm was simply one of an infinite number that this amazing AI could create).
fairglu
Legendary
Offline
Activity: 1100
Merit: 1032
January 16, 2016, 02:41:20 PM
> We just need a revolutionary new compression method. Something like that: http://www.theserverside.com/feature/Has-a-New-York-startup-achieved-a-99-compression-rate
> Video has a lot of predictable redundant data. Random data is, well, random. And video compression is typically lossy. Lossy would never work for the blockchain.
> This is not about video per se. It's about an algorithm that a neural network discovered, which could compress a lot of data with very high percentage ratio. Plus, in that example, the original file is an MP4, which has already reduced redundant data (and has loss of quality).

MP4 still has a lot of redundant data; it only looks for local changes over a few frames. That said, that article triggered a few of my "snake oil salesman" sensors.
erre
Legendary
Offline
Activity: 1680
Merit: 1205
January 16, 2016, 02:48:01 PM
I'm not very tech-savvy, but I think I can understand the problem: how can you make a summary of a book made up of random words? The problem is not easy, but I was thinking about this: what if we substituted the most frequently repeating series of numbers with a symbol? E.g., you could replace 00000 (if it appears many times in the block) with @, 037753198 (if it appears more than N times) with ₩, and so on.
Could it work?
AliceWonderMiscreations
January 16, 2016, 02:48:28 PM
A couple of guys I used to work with developed something they called the Internet Compression Algorithm. It could compress the entire Internet into a single bit. I bet for enough bitcoin they would share it...
AlexGR
Legendary
Offline
Activity: 1708
Merit: 1049
January 16, 2016, 02:50:22 PM
> This is not about video per se. It's about an algorithm that a neural network discovered, which could compress a lot of data with very high percentage ratio.
> The fact that they publish nothing about this supposed algorithm suggests that it is in fact a hoax rather than some revolutionary new thing.

Some ideas can be so radical that even hinting at the direction of the proposed solution could ignite "lamps" over other people's heads, who would then try to reproduce it. If you ask me "is 99.9% compression feasible on every data set", I have 100% confidence that it is; I just don't know the method. Theoretically, even if you only found an algorithm that reduced size by 1-2% on every possible data set, you could fold the data multiple times and bring it to near zero over a large number of iterations. It would have a CPU tradeoff, though.
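The "fold it repeatedly" idea runs into a counting problem: there are 2^n distinct inputs of n bits but only 2^n - 1 outputs shorter than n bits, so no lossless algorithm can shrink every input. Iterating a real compressor shows this directly (zlib and a seeded random input, chosen for illustration):

```python
import random
import zlib

data = random.Random(3).randbytes(100_000)
sizes = [len(data)]
for _ in range(5):
    data = zlib.compress(data, 9)
    sizes.append(len(data))
print(sizes)
# Each "fold" only adds container overhead: the sizes grow, they never
# shrink, because the output of one pass is already high-entropy.
```

The first pass already makes the random input slightly larger, and every further pass compounds the overhead.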
fuathan
January 16, 2016, 02:51:50 PM
> instead of all of this discussions to increase the block size, why don't we just compress the blocks, leaving the size as it is?
> Are you sure about that? Wouldn't something like gzip applied to the blocks reduce their size by like 99%? Which efforts are going on behind the scenes exactly?

If you want to compress anything digital, you need a methodology to pack the bytes and then decompress them later with that same methodology (or formula). There is no such methodology for the digital numbers that make up a block. If someone found one, they could easily produce fake bitcoins.
shorena
Copper Member
Legendary
Offline
Activity: 1498
Merit: 1540
No I dont escrow anymore.
January 16, 2016, 02:52:08 PM
> I'm not very tech-savy, but I think I can understand the problem: how can you make a summary of a book made up of random words? The problem is not easy, but I was thinking about that: what if we substitute the most repeating series of numbers with a symbol? I.e. you could replace the 00000 (if it happens many times in the block) with @, 037753198 (if it happens more than N times) with ₩ and so on.....
> Could it work?

No, because there is no "most repeating series" in random numbers; all series are equally likely. If you create a Huffman code[1] for random data, all code words have the same length, because every symbol has the same probability of appearing. Compression has fundamental limits if you do not want to lose data. Think about it like this: you have 4 things you want to represent; what is the smallest amount of data you can use for each? The answer is 2 bits:

00 - thing #1
01 - thing #2
10 - thing #3
11 - thing #4

Now if someone comes along and claims they can compress 6 things into 2 bits each, they either lose data (2 of the 6) or are full of shit[2].

[1] https://en.wikipedia.org/wiki/Huffman_coding
[2] https://bitcointalk.org/index.php?topic=1330113.msg13573352#msg13573352
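The counting point can be made concrete: four equally likely symbols need exactly two bits each, and uniformly random bytes already sit at the 8-bits-per-byte entropy ceiling, leaving a lossless coder nothing to exploit. A small sketch (seeded data, illustrative names):

```python
import math
import random
from collections import Counter

# Four equally likely "things" need exactly two bits each.
code = {"thing1": "00", "thing2": "01", "thing3": "10", "thing4": "11"}

# Empirical entropy of pseudo-random bytes: essentially 8 bits per byte,
# i.e. already at the ceiling, so Huffman coding saves nothing.
data = random.Random(9).randbytes(1_000_000)
n = len(data)
entropy = -sum(c / n * math.log2(c / n) for c in Counter(data).values())
print(round(entropy, 3))  # ~8.0 bits per byte
```

Any code that assigned fewer than 8 bits to some byte values would have to assign more than 8 to others, gaining nothing on uniform data.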
Im not really here, its just your imagination.
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
January 16, 2016, 02:52:29 PM
> If you ask me "is 99.9% compression feasible" in every data set, I have 100% confidence that it is. I just don't know the method.

Then unfortunately the only thing to say is that you probably shouldn't repeat that (and oops - I just quoted you, so now you can't erase it - damn that "the internet never forgets" thing). And while you're at it, I guess you would also believe 100% in this: https://en.wikipedia.org/wiki/Russell's_teapot