Bitcoin Forum
December 16, 2017, 04:05:34 AM
 News: Latest stable version of Bitcoin Core: 0.15.1  [Torrent].
 Pages: [1]
 Author Topic: Sharing some personal research and a solution for scaling Bitcoin  (Read 333 times)
moukafih.adil
Newbie

Offline

Activity: 7

 December 06, 2017, 12:15:48 PM

I was wondering whether there is any other way to scale Bitcoin on-chain. I did some research, and I want to share it here. This is my first post; I will try my best to explain, and sorry for my English (I'm Moroccan, so English is not my native language).

One analogy that always comes to my mind is that a block is like a bus that passes every 10 minutes. In real life the bus has a theoretical limited number of seats (the 1 MB limit analogy), but it is possible to fit more people on the bus with a little additional work, a kind of "compression" work.

Like in this image:
http://www.ethiopianreview.com/photos/obama_family_kenya.jpg

And in physics it is possible to compress anything into a small, limited volume, but that compression always needs additional work depending on the initial volume (black holes, for example).

So my theory is that it is possible to find an algorithm that does the same thing with data and information. Given an arbitrarily large file, is it mathematically possible to compress it below a 1 MB limit?

From here I began researching the mathematics of number factorisation. The problem with prime factorisation is that constructing the set of prime numbers is computationally infeasible. So I went to the opposite of prime numbers: superior highly composite numbers.

Then I found that there is a formula to construct all superior highly composite numbers:

s(x) = ∏ p^(a_p(x))   (product over all primes p)
with
a_p(x) = ⌊ 1 / (p^(1/x) − 1) ⌋

The question is: just as a number can be factorised into a product of prime numbers, is it possible to "factorise" a number into a sum of superior highly composite numbers?

In short, if this proposition is true:
"Every positive integer is a sum of superior highly composite numbers"

If it is, then it is possible to transform an arbitrarily large number into a sum of SHCNs (superior highly composite numbers), then use the construction formula to map every SHCN in the list to the real number x that generated it, which was calculated from a prime number, and finally map every prime in the list to its index in the sequence of primes.

So the arbitrarily large number is compressed into a permutable set/list of small index numbers.

For example, suppose we want to decompress this list of prime indices:

1,3,6 (1st prime, 3rd prime, 6th prime)

We transform them into the actual primes:

2,5,13

Then we transform each prime into the real number x solving 1/(p^(1/x) − 1) = 1, which gives x = log(p)/log(2):

1,2.32192809489,3.70043971814

Then we use the constructor to calculate the SHCN of each real number, and sum them to get the original number. (Sorry, I don't know how to do this manually.)
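This constructor step can be sketched in Python; a rough illustration assuming the standard SHCN construction s(x) = ∏ p^⌊1/(p^(1/x) − 1)⌋, with a small tolerance to absorb floating-point error at exact boundaries:

```python
import math

def primes_up_to(limit):
    """Trial-division prime list; fine for this small demonstration."""
    primes = []
    for n in range(2, limit + 1):
        if all(n % p for p in primes):
            primes.append(n)
    return primes

PRIMES = primes_up_to(100)

def shcn(x, tol=1e-12):
    """Constructor: s(x) = product over primes p of p ** floor(1/(p**(1/x) - 1)).
    The tolerance absorbs floating-point error at exact boundaries."""
    s = 1
    for p in PRIMES:
        e = math.floor(1.0 / (p ** (1.0 / x) - 1.0) + tol)
        if e == 0:
            break  # exponents are non-increasing in p, so later primes contribute nothing
        s *= p ** e
    return s

# The three x values from the example above:
for x in (1.0, math.log(5) / math.log(2), math.log(13) / math.log(2)):
    print(x, shcn(x))  # SHCNs 2, 60, 720720
```

For the three x values above this yields the SHCNs 2, 60 and 720720.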

This is how decompression works with SHCNs.

Now for compressing a file:

We search for the biggest SHCN less than the file (read as one big number), and record the prime p in that SHCN for which 1/(p^(1/x) − 1) is closest to but not less than 1, together with its index n in the sequence of primes.

file = SHCN(real(prime(n_0))) + file rest

Then we do the same for the remainder, "file rest":

file = SHCN(real(prime(n_0))) + SHCN(real(prime(n_1))) + file rest

until we get this.

file = SHCN(real(prime(n_0))) + ... + SHCN(real(prime(n_m))) + 0 (or 1)

And the compressed file is the list of indices:

n_0,n_1,...,n_m;0 (or 1)

And this compression is very flexible in theory. For example, if we want to compress a 1.1 KB file down to less than 1 KB, we just stop when

file = SHCN(real(prime(n_0))) + ... + SHCN(real(prime(n_k))) + file rest

is less than 1 KB, and the compressed file will be:

n_0,n_1,...,n_k;file rest

For decompression, the program will just decompress n_0, n_1, ..., n_k and then append "file rest" at the end to get the original file.
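This greedy loop can be sketched as follows (the SHCN list is hard-coded from values quoted elsewhere in the thread; this demonstrates only the decomposition step, not any actual size reduction):

```python
# First superior highly composite numbers (OEIS A002201).
SHCNS = [2, 6, 12, 60, 120, 360, 2520, 5040, 55440, 720720,
         1441440, 4324320, 21621600, 367567200, 6983776800]

def greedy_decompose(n):
    """Greedily write n as a sum of SHCNs plus a remainder.
    Since 2 is an SHCN, the remainder is always 0 or 1."""
    terms = []
    while n >= 2:
        biggest = max(s for s in SHCNS if s <= n)
        terms.append(biggest)
        n -= biggest
    return terms, n  # (list of SHCN terms, remainder)

print(greedy_decompose(42))  # ([12, 12, 12, 6], 0)
```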

If this compression is mathematically provable and computationally feasible, it could be used to compress an arbitrarily big number of transactions to less than 1 MB, by adding "compression" work to mining on top of the proof of work. It would also make hashing more elementary: just permute the numbers in the list of the compressed block and hash it.

And if miners want to profit more from fees, they need to perform more "compression" work by compressing more transactions into the block. And maybe the added "compression" work would make ASIC mining more difficult, so mining would become more decentralised.

But all this is just a theory; I haven't tested any of the above. That's why I want to share it here, to hear your thoughts about this type of compression and to collaborate. In theory it would make it possible to losslessly compress all sorts of data, but my purpose is to use it for scaling Bitcoin.

PS: I just made a mistake: the real number x for an SHCN is the solution of 1/(p^(1/x) − 1) = k, where k is the closest integer found when searching for the SHCN, so x is log(p)/log(1+k).

So the compressed file will be in this form:

(n_0,k_0),(n_1,k_1),...,(n_m,k_m);file rest

It reminds me of the product factorisation p0^k0 * p1^k1 * ... * pm^km, but with summation. It's a good sign that summation factorisation might be in bijection with product factorisation.
hasmukh_rawal
Full Member

Offline

Activity: 168

 December 06, 2017, 12:46:16 PM

I am not much into mathematics, since I was bad at the subject in my school days, but I do know quite a few things about the digital world. I have studied computer science all my life and know there are ways to solve any problem with the right mindset and intelligence. I have learned that data can be compressed without losing any information.
So I think it is possible to compress the raw data (the transactions in blocks) and use that to free up some space in the blocks. A lossless compression might be possible by modifying the algorithm, but it would take many experts in coding to find a feasible way to compress the raw data and save space to fit more data into a block. The question is: if it is possible, why hasn't anybody tried it yet?

Newbie

Offline

Activity: 7

 December 06, 2017, 01:39:14 PM

Quote from: hasmukh_rawal
I am not much into mathematics, since I was bad at the subject in my school days, but I do know quite a few things about the digital world. I have studied computer science all my life and know there are ways to solve any problem with the right mindset and intelligence. I have learned that data can be compressed without losing any information.
So I think it is possible to compress the raw data (the transactions in blocks) and use that to free up some space in the blocks. A lossless compression might be possible by modifying the algorithm, but it would take many experts in coding to find a feasible way to compress the raw data and save space to fit more data into a block. The question is: if it is possible, why hasn't anybody tried it yet?

What I know about lossless compression is that it uses dictionaries, like zip and WinRAR. In short, today's compression is just backed by custom dictionaries, and the dictionaries are just built from data analysis and statistics. There is not much mathematics in it.

In math, everything can be compressed and simplified, and the dictionary is the prime numbers. The biggest numbers can be written as simple formulas. And I think this can also be done with raw data, if we see it as one big number.

But, same as you, I also ask myself that question: if it is possible, why hasn't anybody tried it yet? Maybe it already exists and they don't want to publish it, because they want to sell us big data for big money. Or maybe no one has tried it yet; the first people to discover something always ask that question, so it's normal.
Newbie

Offline

Activity: 7

 December 06, 2017, 01:50:52 PM

Quote from: moukafih.adil
Then we transform each prime into the real number x solving 1/(p^(1/x) − 1) = 1, which gives x = log(p)/log(2).

I just made a mistake: the real number x for an SHCN is the solution of 1/(p^(1/x) − 1) = k, where k is the closest integer found when searching for the SHCN, so x is log(p)/log(1+k).

So the compressed file will be in this form:

(n_0,k_0),(n_1,k_1),...,(n_m,k_m);file rest

It reminds me of the product factorisation p0^k0 * p1^k1 * ... * pm^km, but with summation. It's a good sign that summation factorisation might be in bijection with product factorisation.
nullius
Copper Member
Newbie

Offline

Activity: 28

@nym.zone

 December 06, 2017, 03:56:45 PM

Quote from: moukafih.adil
One analogy that always comes to my mind is that a block is like a bus that passes every 10 minutes. In real life the bus has a theoretical limited number of seats (the 1 MB limit analogy), but it is possible to fit more people on the bus with a little additional work, a kind of "compression" work.

[...]

And in physics it is possible to compress anything into a small, limited volume, but that compression always needs additional work depending on the initial volume (black holes, for example).

So my theory is that it is possible to find an algorithm that does the same thing with data and information. Given an arbitrarily large file, is it mathematically possible to compress it below a 1 MB limit?

What an excellent idea!!  May I ask a humble question, maybe to improve your genius.  Why not feed the output of the compression program back into the compression program recursively?  You could compress the whole blockchain to be printed in a QR code for backup!  Or even the whole Internet!

Possible prior art:  WEB compressor, U.S. Patent 5,533,051, U.S. Patent 5,488,364, etc.  Tell me, is your method patented??

(Forum, please forgive me.  I never had the pleasure of suffering these in comp.compression.)

Bitcoin address, Segwit P2WPKH-in-P2SH: 36finjay27E5XPDtSdLEsPR1RypfhNW8D8 (Tips welcome.)
PGP Ed25519: 0xC2E91CD74A4C57A105F6C21B5A00591B2F307E0C (preferred) — RSA: 0xA232750664CC39D61CE5D61536EBB4AB699A10EE
‘If you’re not doing anything wrong, you have nothing to hide.’  No!  Because I do nothing wrong, I have nothing to show.” — nullius@nym.zone
Newbie

Offline

Activity: 7

 December 06, 2017, 04:23:23 PM

Quote from: nullius
What an excellent idea!!  May I ask a humble question, maybe to improve your genius.  Why not feed the output of the compression program back into the compression program recursively?  You could compress the whole blockchain to be printed in a QR code for backup!  Or even the whole Internet!

Possible prior art:  WEB compressor, U.S. Patent 5,533,051, U.S. Patent 5,488,364, etc.  Tell me, is your method patented??

(Forum, please forgive me.  I never had the pleasure of suffering these in comp.compression.)

Yeah, I think it's possible, by compressing the compression, until the whole blockchain or even the internet fits in a QR code. And if you want to decompress, you need to let it grow in the program, like a tree growing from a small seed. And this is just the "compression" work.

My method is just a theory written on some pieces of paper; I haven't made any program yet to test whether it works.

No, I haven't patented it yet. I have no idea how to proceed as a Moroccan citizen; let me know if you have any experience with patent paperwork.

To be honest, I don't care whether it is patented or not. The important thing for me is to make it work and keep it open to everyone. I don't care who gets the credit in the end.

I think the first important step for this method is to prove the mathematics behind it: the proposition that "every positive integer is a sum of superior highly composite numbers", then to prove that the summation-factorisation algorithm is computationally feasible for large numbers. And at the end, to make an open-source program to test it and make it real.

EDIT:
I forgot the constraint of the prime-number dictionary.

Compressing a file needs a dictionary of prime numbers, and the same goes for decompressing. So compressing the blockchain or the internet would need a huge dictionary of prime numbers in storage, so I think it's not feasible. Maybe a new formula to construct prime numbers, like the one for SHCNs, will be discovered in the future, who knows.
mda
Member

Online

Activity: 86

 December 06, 2017, 05:02:52 PM

Quote from: moukafih.adil
In short, if this proposition is true:
"Every positive integer is a sum of superior highly composite numbers"
Quote
The first 15 superior highly composite numbers, 2, 6, 12, 60, 120, 360, 2520, 5040, 55440, 720720, 1441440, 4324320, 21621600, 367567200, 6983776800

1 is a positive integer, so the quoted proposition isn't true, sorry.
nullius
Copper Member
Newbie

Offline

Activity: 28

@nym.zone

 December 06, 2017, 05:26:13 PM

Quote from: nullius
What an excellent idea!!  May I ask a humble question, maybe to improve your genius.  Why not feed the output of the compression program back into the compression program recursively?  You could compress the whole blockchain to be printed in a QR code for backup!  Or even the whole Internet!

Possible prior art:  WEB compressor, U.S. Patent 5,533,051, U.S. Patent 5,488,364, etc.  Tell me, is your method patented??

(Forum, please forgive me.  I never had the pleasure of suffering these in comp.compression.)

Quote from: moukafih.adil
Yeah, I think it's possible, by compressing the compression, until the whole blockchain or even the internet fits in a QR code. And if you want to decompress, you need to let it grow in the program, like a tree growing from a small seed. And this is just the "compression" work.

My method is just a theory written on some pieces of paper; I haven't made any program yet to test whether it works.

No, I haven't patented it yet. I have no idea how to proceed as a Moroccan citizen; let me know if you have any experience with patent paperwork.

To be honest, I don't care whether it is patented or not. The important thing for me is to make it work and keep it open to everyone. I don't care who gets the credit in the end.

I think the first important step for this method is to prove the mathematics behind it: the proposition that "every positive integer is a sum of superior highly composite numbers", then to prove that the summation-factorisation algorithm is computationally feasible for large numbers. And at the end, to make an open-source program to test it and make it real.

EDIT:
I forgot the constraint of the prime-number dictionary.

Compressing a file needs a dictionary of prime numbers, and the same goes for decompressing. So compressing the blockchain or the internet would need a huge dictionary of prime numbers in storage, so I think it's not feasible. Maybe a new formula to construct prime numbers, like the one for SHCNs, will be discovered in the future, who knows.

I want to frame this and hang it on my wall.  Tell me, have you ever considered designing a motor which drives an electrical generator to power the motor?  Or a train locomotive which holds a giant magnet in front, to pull the whole train forward?  Or a water wheel which powers a pump which moves the water back to the top of the water wheel?  Fascinating ideas.

I will grant that your Bitcoin scaling idea is superior to the scaling ideas behind BCH, BU, 2X, and their ilk.

Now, you go read Section 9 of the comp.compression FAQ.  Please don’t come back unless you’ve read that, else the forum may never forgive me.

Quote from: comp.compression FAQ
It is mathematically impossible to create a program compressing without loss all files by at least one bit (see below and also item 73 in part 2 of this FAQ). Yet from time to time some people claim to have invented a new algorithm for doing so. Such algorithms are claimed to compress random data and to be applicable recursively, that is, applying the compressor to the compressed output of the previous run, possibly multiple times. Fantastic compression ratios of over 100:1 on random data are claimed to be actually obtained.

[...]

9.2 The counting argument

Theorem:
No program can compress without loss all files of size >= N bits, for any given integer N >= 0.

Proof:
Assume that the program can compress without loss all files of size >= N bits.  Compress with this program all the 2^N files which have exactly N bits.  All compressed files have at most N-1 bits, so there are at most (2^N)-1 different compressed files [2^(N-1) files of size N-1, 2^(N-2) of size N-2, and so on, down to 1 file of size 0]. So at least two different input files must compress to the same output file. Hence the compression program cannot be lossless.

The proof is called the "counting argument". It uses the so-called pigeon-hole principle: you can't put 16 pigeons into 15 holes without using one of the holes twice.
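A tiny script makes the counting concrete for N = 4:

```python
from itertools import product

N = 4
# Every file of exactly N bits...
inputs = [''.join(bits) for bits in product('01', repeat=N)]
# ...versus every strictly shorter file (including the empty one).
shorter = [''.join(bits) for k in range(N) for bits in product('01', repeat=k)]

# 2**4 = 16 inputs but only 1 + 2 + 4 + 8 = 15 shorter outputs, so any
# lossless compressor must send two different inputs to the same output.
print(len(inputs), len(shorter))  # 16 15
```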

The overcrowded pigeons were a hint.  I did not waste my time reading your equations.  What you are trying to do is proved mathematically impossible.

By the way, JPEG is a lossy compression format.  It throws away information.  Your example of a compressed file was a JPEG.  Learn the difference before you try designing a compression algorithm.

Newbie

Offline

Activity: 7

 December 06, 2017, 05:45:25 PM

Quote from: mda
Quote
The first 15 superior highly composite numbers, 2, 6, 12, 60, 120, 360, 2520, 5040, 55440, 720720, 1441440, 4324320, 21621600, 367567200, 6983776800

1 is a positive integer, so the quoted proposition isn't true, sorry.

1 = 1
2 = 2
3 = 2 + 1
4 = 2 + 2
5 = 2 + 2 + 1
6 = 6
7 = 6 + 1
8 = 6 + 2
9 = 6 + 2 + 1
10 = 6 + 2 + 2

I feel like it is "Every number is a sum of superior highly composite numbers and 1", or maybe simply "Every number is a sum of highly composite numbers", but this needs a rigorous proof.

Newbie

Offline

Activity: 7

 December 06, 2017, 06:44:23 PM

Quote from: nullius
I want to frame this and hang it on my wall.  Tell me, have you ever considered designing a motor which drives an electrical generator to power the motor?  Or a train locomotive which holds a giant magnet in front, to pull the whole train forward?  Or a water wheel which powers a pump which moves the water back to the top of the water wheel?  Fascinating ideas.

I will grant that your Bitcoin scaling idea is superior to the scaling ideas behind BCH, BU, 2X, and their ilk.

Now, you go read Section 9 of the comp.compression FAQ.  Please don’t come back unless you’ve read that, else the forum may never forgive me.

Quote from: comp.compression FAQ
It is mathematically impossible to create a program compressing without loss all files by at least one bit (see below and also item 73 in part 2 of this FAQ). Yet from time to time some people claim to have invented a new algorithm for doing so. Such algorithms are claimed to compress random data and to be applicable recursively, that is, applying the compressor to the compressed output of the previous run, possibly multiple times. Fantastic compression ratios of over 100:1 on random data are claimed to be actually obtained.

[...]

9.2 The counting argument

Theorem:
No program can compress without loss all files of size >= N bits, for any given integer N >= 0.

Proof:
Assume that the program can compress without loss all files of size >= N bits.  Compress with this program all the 2^N files which have exactly N bits.  All compressed files have at most N-1 bits, so there are at most (2^N)-1 different compressed files [2^(N-1) files of size N-1, 2^(N-2) of size N-2, and so on, down to 1 file of size 0]. So at least two different input files must compress to the same output file. Hence the compression program cannot be lossless.

The proof is called the "counting argument". It uses the so-called pigeon-hole principle: you can't put 16 pigeons into 15 holes without using one of the holes twice.

The overcrowded pigeons were a hint.  I did not waste my time reading your equations.  What you are trying to do is proved mathematically impossible.

By the way, JPEG is a lossy compression format.  It throws away information.  Your example of a compressed file was a JPEG.  Learn the difference before you try designing a compression algorithm.

Thank you for your help, I really appreciate it. And I made sure to read the comp.compression FAQ, even if it was a little too technical for me and my modest English.

The method I give in my post is just an idea that I think is useful for the community and the future. I haven't done any research in compression theory, and sorry for my lack of experience in this field.

Just about the theory: I think my method cannot compress all files. It's possible to find many outputs that are bigger than the input. I don't claim to have found the perfect compression.

But imagine this scenario of compressing the compression. I give a file as input and the output is smaller (meaning it got compressed); then I give the output again as input, and it gets smaller again; then again, and it gets smaller a third time; but on the fourth pass the output gets bigger, so we stop there and take the smallest output. Then we can split that output into parts of different sizes, give them as inputs, search for parts that get smaller after compression, and take the smallest ones, recursively. Then we split those into parts again and do the same thing, until we reach a satisfying size, for example a compressed block smaller than 1 MB. But this needs a big amount of "compression" work, searching for the simplest representation of the file, like searching for the best chess move in a tree.
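This "compress until it grows" loop can be sketched with zlib standing in for the hypothetical SHCN compressor (the splitting-into-parts search is omitted):

```python
import zlib

def compress_repeatedly(data):
    """Re-compress until the output stops shrinking, keeping the smallest version."""
    passes = 0
    while True:
        out = zlib.compress(data, 9)
        if len(out) >= len(data):
            return data, passes  # re-compression stopped paying off
        data, passes = out, passes + 1

smallest, passes = compress_repeatedly(b"abc" * 10000)
print(len(smallest), passes)
```

On repetitive input like this, the first pass shrinks the data dramatically and the loop stops almost immediately afterwards, because the first output is already high-entropy.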

I don't know whether this approach already exists, or whether it has been proven possible or impossible mathematically. Just let me know if someone has written about it.

And I also think it's possible to find a file where, no matter what you do, the output for the whole file or for any part of it is always bigger than the input. In that case the file is itself the optimal compression.

EDIT:
The problem with splitting an output into parts is that the order of the parts must be stored, so extra information is added. To avoid that, the compressed file is a permutable list of numbers: if the compressed file cannot be compressed again, we just permute the order of the numbers and try again until it compresses, until we are satisfied with the size, a little like what miners do when hashing blocks.
mda
Member

Online

Activity: 86

 December 06, 2017, 09:31:49 PM

There is a gap between 12×2 + 6×2 + 2×2 + 1 = 41 and 60.
After all, a product is just repeated addition, and it's the miner's job to repeat operations.
Newbie

Offline

Activity: 7

 December 06, 2017, 10:29:02 PM

Quote from: mda
There is a gap between 12×2 + 6×2 + 2×2 + 1 = 41 and 60.
After all, a product is just repeated addition, and it's the miner's job to repeat operations.

Because we know how to construct the set of SHCNs, unlike the set of prime numbers, which needs very high computational power to build.

42 = 12+12+12+6

In real numbers, from the constructor of SHCNs:

42 = (2)+(2)+(2)+(1.58496250072)

In prime indices, from (n,k), where x(prime(n), k) = ln(k+1)/ln(prime(n)):

42 = (1,3)+(1,3)+(1,3)+(1,2)

so the output is the set {(1,3),(1,3),(1,3),(1,2)}

If we want to get the original back:

we have the real number of (1,3): x(prime(1), 3) = ln(3+1)/ln(prime(1)) = ln(4)/ln(2) = 2,
and the same for (1,2): x(prime(1), 2) = ln(2+1)/ln(prime(1)) = ln(3)/ln(2) = 1.58496250072.

then

{(1,3),(1,3),(1,3),(1,2)} = (2)+(2)+(2)+(1.58496250072)

And from the constructor of SHCNs we calculate the value of each real number: the SHCN of 2 is 12, and the SHCN of 1.58496250072 is 6.

{(1,3),(1,3),(1,3),(1,2)} = 12+12+12+6

Then we sum them:

{(1,3),(1,3),(1,3),(1,2)} = 42

But this example number is very small; I predict that for small numbers the output will always be bigger than the input. The question is whether it is feasible, for a very large number, to get an output smaller than the input.
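This round trip can be checked with a short Python sketch; PRIMES, shcn, and decode are illustrative names, and the small tolerance absorbs floating-point error at exact boundaries:

```python
import math

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # prime(1) = 2, prime(2) = 3, ...

def shcn(x, tol=1e-12):
    """Constructor: s(x) = product over primes p of p ** floor(1/(p**(1/x) - 1))."""
    s = 1
    for p in PRIMES:
        e = math.floor(1.0 / (p ** (1.0 / x) - 1.0) + tol)
        if e == 0:
            break  # exponents only shrink as p grows
        s *= p ** e
    return s

def decode(pairs):
    """Map each (n, k) to x = ln(k+1)/ln(prime(n)) and sum the resulting SHCNs."""
    return sum(shcn(math.log(k + 1) / math.log(PRIMES[n - 1])) for n, k in pairs)

print(decode([(1, 3), (1, 3), (1, 3), (1, 2)]))  # 42
```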