What an excellent idea!! May I ask a humble question, perhaps to improve on your genius: why not feed the output of the compression program back into the compression program recursively? You could compress the whole blockchain to be printed in a QR code for backup! Or even the whole Internet!
Possible prior art:
WEB compressor, U.S. Patent 5,533,051,
U.S. Patent 5,488,364, etc. Tell me, is your method patented??
(Forum, please forgive me. I never had the pleasure of suffering these in comp.compression.)

Yeah, I think it's possible, by compressing the compression, until the whole blockchain or the Internet fits in a QR code. And if you want to decompress, you need to let it grow in the program, like a tree growing from a small seed. That is, if the compression works at all.
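(As a quick reality check on "compressing the compression": a minimal sketch that feeds a compressor its own output, using Python's standard zlib as a stand-in compressor, not the SHCN method discussed here.)

```python
import os
import zlib

# 1 MiB of random bytes, as a stand-in for data that is already
# compressed (compressed data looks statistically random).
data = os.urandom(1024 * 1024)

# Feed the compressor its own output several times. After the first
# pass there is no redundancy left to remove, so each further pass
# only adds header overhead instead of shrinking anything.
for i in range(5):
    data = zlib.compress(data, level=9)
    print(f"pass {i + 1}: {len(data)} bytes")
```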
My method is just a theory written on some pieces of paper; I haven't made any program yet to test whether it works.
No, I haven't patented it yet. I have no idea how a Moroccan citizen should proceed, if you have any experience with patent paperwork.
To be honest, I don't care whether it gets patented or not. The important thing for me is to make it work and keep it open to everyone. I don't care who gets the credit in the end.
I think the first important step for this method is to prove the mathematics behind it: the proposition that "every positive integer is a sum of Superior Highly Composite Numbers". Then prove that the summation-factorisation algorithm is computationally feasible for large numbers. And at the end, write some open-source program to test it and make it real.
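(Before attempting a proof, the proposition can be checked empirically for small values. A minimal sketch, assuming the first few SHCNs from OEIS A002201 and assuming repeated summands are allowed, since the post doesn't say:)

```python
# Empirical check of "every positive integer is a sum of SHCNs"
# for n up to LIMIT. First few superior highly composite numbers
# (OEIS A002201); allowing repeats is an assumption.
SHCN = [2, 6, 12, 60, 120, 360, 2520, 5040, 55440]
LIMIT = 10_000

# Coin-problem style dynamic programming: reachable[n] is True when
# n can be written as a sum of entries of SHCN.
reachable = [False] * (LIMIT + 1)
reachable[0] = True
for n in range(1, LIMIT + 1):
    reachable[n] = any(n >= s and reachable[n - s] for s in SHCN)

misses = [n for n in range(1, LIMIT + 1) if not reachable[n]]
print(f"{len(misses)} of {LIMIT} values have no SHCN sum, e.g. {misses[:10]}")
```

Note that since every SHCN is even, any sum of them is even, so this check flags every odd integer; the proposition as stated needs some refinement before a proof can even be attempted.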
EDIT: Just forget the constraint of the dictionary of prime numbers.
Compressing a file needs a dictionary of prime numbers, and the same goes for decompressing. So compressing the blockchain or the Internet would need a large dictionary of prime numbers in storage, which I think is not feasible. Maybe if a new formula is discovered to construct prime numbers, like the one for SHCNs, in the future, who knows.
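(For a sense of the storage constraint: a minimal sketch that builds such a prime dictionary with a plain Sieve of Eratosthenes. This is an illustration only; the post doesn't specify how the dictionary would be constructed.)

```python
from math import isqrt

def primes_up_to(n: int) -> list[int]:
    """Sieve of Eratosthenes: return all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            # Cross off all multiples of p starting at p*p.
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, is_p in enumerate(sieve) if is_p]

# Even a modest dictionary is big: all primes below ten million.
primes = primes_up_to(10_000_000)
print(f"{len(primes)} primes below 10^7, largest: {primes[-1]}")
```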
I want to frame this and hang it on my wall. Tell me, have you ever considered designing a motor which drives an electrical generator to power the motor? Or a train locomotive which holds a giant magnet in front, to pull the whole train forward? Or a water wheel which powers a pump which moves the water back to the top of the water wheel?
Fascinating ideas. I will grant that your Bitcoin scaling idea is superior to the scaling ideas behind BCH, BU, 2X, and their ilk. Now go read Section 9 of the comp.compression FAQ. Please don't come back unless you've read it, else the forum may never forgive me.
It is mathematically impossible to create a program compressing without loss all files by at least one bit (see below and also item 73 in part 2 of this FAQ). Yet from time to time some people claim to have invented a new algorithm for doing so. Such algorithms are claimed to compress random data and to be applicable recursively, that is, applying the compressor to the compressed output of the previous run, possibly multiple times. Fantastic compression ratios of over 100:1 on random data are claimed to be actually obtained.
[...]
9.2 The counting argument
Theorem:
No program can compress without loss all files of size >= N bits, for any given integer N >= 0.
Proof:
Assume that the program can compress without loss all files of size >= N bits. Compress with this program all the 2^N files which have exactly N bits. All compressed files have at most N-1 bits, so there are at most (2^N)-1 different compressed files [2^(N-1) files of size N-1, 2^(N-2) of size N-2, and so on, down to 1 file of size 0]. So at least two different input files must compress to the same output file. Hence the compression program cannot be lossless.
The proof is called the "counting argument". It uses the so-called pigeon-hole principle: you can't put 16 pigeons into 15 holes without using one of the holes twice.
The overcrowded pigeons were a hint. I did not waste my time reading your equations. What you are trying to do has been proved mathematically impossible.
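(For anyone who prefers running the pigeons to counting them, a minimal sketch of the FAQ's counting argument for N = 3:)

```python
from itertools import product

# Counting argument, brute-forced for N = 3: enumerate every input of
# exactly N bits and every possible output shorter than N bits.
N = 3
inputs = ["".join(bits) for bits in product("01", repeat=N)]
shorter = ["".join(bits) for k in range(N) for bits in product("01", repeat=k)]

print(f"{len(inputs)} files of {N} bits")             # 2^3 = 8 pigeons
print(f"{len(shorter)} files shorter than {N} bits")  # 1 + 2 + 4 = 7 holes
```

With 8 inputs and only 7 possible shorter outputs, at least two inputs must share an output, so the decompressor cannot recover both.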
By the way, JPEG is a lossy compression format. It throws away information. Your example of a compressed file was a JPEG. Learn the difference before you try designing a compression algorithm.