Bitcoin Forum

Alternate cryptocurrencies => Altcoin Discussion => Topic started by: vpncoin on July 01, 2016, 04:22:09 PM



Title: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 01, 2016, 04:22:09 PM
To moderator gmaxwell:
This source code is just for testing; it compiles and runs on Ubuntu and Windows,
and it is compatible with Bitcoin and does not fork Bitcoin.
Please don't move this thread, thanks.



Hello,
  I am Vpncoin's developer, nice to meet you.
We have developed a blockchain compression algorithm.
It can reduce disk space usage by about 25% and also reduce network traffic.
We are happy to share it, and it is free.
The added source code is compatible with Bitcoin and will not fork it.

The core compression algorithms are LZMA (7-Zip) and LZ4.
Our compression code has been deployed on Vpncoin and runs stably.
If anyone wants to use this code,
please credit the authors (Vpncoin development team, Bit Lee).


If you are interested in this, please post here,
and I will publish the relevant source code.
Thanks.


https://i.imgur.com/PMuVcad.png


Title: Re: Blockchain compression algorithm
Post by: vpncoin on July 01, 2016, 04:28:42 PM
To cut a long story short, I will show the source code directly.

Add code to init.cpp
 
Code:
int dw_zip_block = 0;
int dw_zip_limit_size = 0;
int dw_zip_txdb = 0;

bool AppInit2()
{
...
    // ********************************************************* Step 2: parameter interactions

#ifdef WIN32
    dw_zip_block = GetArg("-zipblock", 1);
#else
    /* The 7-Zip code still needs work on Linux: it runs, but sometimes crashes. */
    dw_zip_block = GetArg("-zipblock", 0);
#endif
    dw_zip_limit_size = GetArg("-ziplimitsize", 64);
    dw_zip_txdb = GetArg("-ziptxdb", 0);
    if( dw_zip_block > 1 ){ dw_zip_block = 1; }
    else if( dw_zip_block == 0 ){ dw_zip_txdb = 0; }

...
}
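For reference, with the `GetArg()` names above, the feature would be toggled at startup roughly like this (a hypothetical invocation; the exact daemon name and appropriate values depend on the coin being patched):

```shell
# Hypothetical startup flags matching the GetArg() names above:
#   -zipblock=1  -> compress blocks with LZMA on write (0 disables compression)
#   -ziptxdb=0   -> leave the transaction database uncompressed
./bitcoind -zipblock=1 -ziptxdb=0
```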
 

Add code to main.h
 
Code:
extern int bitnet_pack_block(CBlock* block, string& sRzt);
extern bool getCBlockByFilePos(CAutoFile filein, unsigned int nBlockPos, CBlock* block);
extern bool getCBlocksTxByFilePos(CAutoFile filein, unsigned int nBlockPos, unsigned int txId, CTransaction& tx);
extern int dw_zip_block;

class CTransaction
{
...
    bool ReadFromDisk(CDiskTxPos pos, FILE** pfileRet=NULL)
    {
        CAutoFile filein = CAutoFile(OpenBlockFile(pos.nFile, 0, pfileRet ? "rb+" : "rb"), SER_DISK, CLIENT_VERSION);
        if (!filein)
            return error("CTransaction::ReadFromDisk() : OpenBlockFile failed");

        if( dw_zip_block > 0 )
        {
            //if( fDebug ) printf("CTransaction::ReadFromDisk():: pos.nFile [%d], nBlockPos [%d], nTxPos [%d], pfileRet [%d] \n", pos.nFile, pos.nBlockPos, pos.nTxPos, pfileRet);
            getCBlocksTxByFilePos(filein, pos.nBlockPos, pos.nTxPos, *this);
        }else{
            // Read transaction
            if (fseek(filein, pos.nTxPos, SEEK_SET) != 0)
                return error("CTransaction::ReadFromDisk() : fseek failed");

            try {
                filein >> *this;
            }
            catch (std::exception &e) {
                return error("%s() : deserialize or I/O error", __PRETTY_FUNCTION__);
            }
        }

        // Return file pointer
        if (pfileRet)
        {
            if (fseek(filein, pos.nTxPos, SEEK_SET) != 0)
                return error("CTransaction::ReadFromDisk() : second fseek failed");
            *pfileRet = filein.release();
        }
        return true;
    }
...
}

class CBlock
{
...
    bool WriteToDisk(unsigned int& nFileRet, unsigned int& nBlockPosRet, bool bForceWrite = false)
    {
        // Open history file to append
        CAutoFile fileout = CAutoFile(AppendBlockFile(nFileRet), SER_DISK, CLIENT_VERSION);
        if (!fileout)
            return error("CBlock::WriteToDisk() : AppendBlockFile failed");

        // Write index header
        unsigned int nSize = fileout.GetSerializeSize(*this);

        int nSize2 = nSize;
        string sRzt;
        if( dw_zip_block > 0 )
        {
            // compress the block; nSize now includes a 4-byte prefix holding the real block size
            nSize = bitnet_pack_block(this, sRzt);
        }

        fileout << FLATDATA(pchMessageStart) << nSize;

        // Write block
        long fileOutPos = ftell(fileout);
        if (fileOutPos < 0)
            return error("CBlock::WriteToDisk() : ftell failed");
        nBlockPosRet = fileOutPos;

        if( dw_zip_block == 0 ){ fileout << *this; }
        else{
            //if( fDebug ) printf("main.h Block.WriteToDisk:: nFileRet [%d], nBlockSize [%d], zipBlockSize [%d], nBlockPosRet = [%d] \n", nFileRet, nSize2, nSize, nBlockPosRet);
            // write the compressed block bytes instead of serializing *this
            if( nSize > 0 ){
                fileout.write(sRzt.c_str(), nSize);
            }
            sRzt.resize(0);
        }

        // Flush stdio buffers and commit to disk before returning
        fflush(fileout);
        if( bForceWrite || (!IsInitialBlockDownload() || (nBestHeight+1) % 500 == 0) )
            FileCommit(fileout);

        return true;
    }

    bool ReadFromDisk(unsigned int nFile, unsigned int nBlockPos, bool fReadTransactions=true)
    {
        SetNull();
        unsigned int iPos = nBlockPos;
        if( dw_zip_block > 0 ){ iPos = 0; }

        // Open history file to read
        CAutoFile filein = CAutoFile(OpenBlockFile(nFile, iPos, "rb"), SER_DISK, CLIENT_VERSION);
        if (!filein)
            return error("CBlock::ReadFromDisk() : OpenBlockFile failed");
        if (!fReadTransactions)
            filein.nType |= SER_BLOCKHEADERONLY;

        // Read block
        try {
            if( dw_zip_block > 0 )
            {
                getCBlockByFilePos(filein, nBlockPos, this);
            }else{ filein >> *this; }
        }
        catch (std::exception &e) {
            return error("%s() : deserialize or I/O error", __PRETTY_FUNCTION__);
        }

        // Check the header
        if (fReadTransactions && IsProofOfWork() && !CheckProofOfWork(GetPoWHash(), nBits))
            return error("CBlock::ReadFromDisk() : errors in block header");

        return true;
    }

...
}


Title: Re: Blockchain compression algorithm
Post by: vpncoin on July 01, 2016, 04:31:20 PM
Some correlation function:
 
Code:
#include "lz4/lz4.h"
#include "lzma/LzmaLib.h"

int StreamToBuffer(CDataStream &ds, string& sRzt, int iSaveBufSize)
{
    int bsz = ds.size();
    int iRsz = bsz;
    if( iSaveBufSize > 0 ){ iRsz = iRsz + 4; }
    sRzt.resize(iRsz);
    char* ppp = (char*)sRzt.c_str();
    if( iSaveBufSize > 0 ){ ppp = ppp + 4; }
    ds.read(ppp, bsz);
    if( iSaveBufSize > 0 ){ *(unsigned int *)(ppp - 4) = bsz; }
    return iRsz;
}

int CBlockToBuffer(CBlock *pb, string& sRzt)
{
    CDataStream ssBlock(SER_DISK, CLIENT_VERSION);
    ssBlock << (*pb);
    int bsz = StreamToBuffer(ssBlock, sRzt, 0);
    return bsz;
}

int writeBufToFile(char* pBuf, int bufLen, string fName)
{
    int rzt = 0;
    std::ofstream oFs(fName.c_str(), std::ios::out | std::ofstream::binary);
    if( oFs.is_open() )
    {
        if( pBuf ) oFs.write(pBuf, bufLen);
        oFs.close();
        rzt++;
    }
    return rzt;
}

int lz4_pack_buf(char* pBuf, int bufLen, string& sRzt)
{
    int worstCase = 0;
    int lenComp = 0;
    try{
        worstCase = LZ4_compressBound( bufLen );
        sRzt.resize(worstCase + 4);
        char* pp = (char *)sRzt.c_str();
        lenComp = LZ4_compress(pBuf, pp + 4, bufLen);
        if( lenComp > 0 ){ *(unsigned int *)pp = bufLen;   lenComp = lenComp + 4; }
    }
    catch (std::exception &e) {
        printf("lz4_pack_buf err [%s]:: buf len %d, worstCase[%d], lenComp[%d] \n", e.what(), bufLen, worstCase, lenComp);
    }
    return lenComp;
}

int lz4_unpack_buf(const char* pZipBuf, unsigned int zipLen, string& sRzt)
{
    int rzt = 0;
    unsigned int realSz = *(unsigned int *)pZipBuf;
    if( fDebug ) printf("lz4_unpack_buf:: zipLen [%d], realSz [%d] \n", zipLen, realSz);
    sRzt.resize(realSz);
    char* pOutData = (char*)sRzt.c_str();

    // -- decompress
    rzt = LZ4_decompress_safe(pZipBuf + 4, pOutData, zipLen, realSz);
    if ( rzt != (int) realSz )
    {
        if( fDebug ) printf("lz4_unpack_buf:: Could not decompress message data. [%d :: %d] \n", rzt, realSz);
        sRzt.resize(0);
    }
    return rzt;
}

int CBlockFromBuffer(CBlock* block, char* pBuf, int bufLen)
{
    CDataStream ssBlock(SER_DISK, CLIENT_VERSION);
    ssBlock.write(pBuf, bufLen);
    int i = ssBlock.size();
    ssBlock >> (*block);
    return i;
}

int lz4_pack_block(CBlock* block, string& sRzt)
{
    int rzt = 0;
    string sbf;
    int bsz = CBlockToBuffer(block, sbf);
    if( bsz > 12 )
    {
        char* pBuf = (char*)sbf.c_str();
        rzt = lz4_pack_buf(pBuf, bsz, sRzt);
    }
    sbf.resize(0);
    return rzt;
}

int lzma_depack_buf(unsigned char* pLzmaBuf, int bufLen, string& sRzt)
{
    int rzt = 0;
    unsigned int dstLen = *(unsigned int *)pLzmaBuf;
    sRzt.resize(dstLen);
    unsigned char* pOutBuf = (unsigned char*)sRzt.c_str();
    unsigned srcLen = bufLen - LZMA_PROPS_SIZE - 4;
    SRes res = LzmaUncompress(pOutBuf, &dstLen, &pLzmaBuf[LZMA_PROPS_SIZE + 4], &srcLen, &pLzmaBuf[4], LZMA_PROPS_SIZE);
    if( res == SZ_OK )
    {
        rzt = dstLen;
    }else sRzt.resize(0);
    if( fDebug ) printf("lzma_depack_buf:: res [%d], dstLen[%d],  rzt = [%d]\n", res, dstLen, rzt);
    return rzt;
}

int lzma_pack_buf(unsigned char* pBuf, int bufLen, string& sRzt, int iLevel, unsigned int iDictSize)  // (1 << 17) = 131072 = 128K
{
    int res = 0;
    int rzt = 0;
    unsigned propsSize = LZMA_PROPS_SIZE;
    unsigned destLen = bufLen + (bufLen / 3) + 128;
    try{
        sRzt.resize(propsSize + destLen + 4);
        unsigned char* pOutBuf = (unsigned char*)sRzt.c_str();

        res = LzmaCompress(&pOutBuf[LZMA_PROPS_SIZE + 4], &destLen, pBuf, bufLen, &pOutBuf[4], &propsSize,
                           iLevel, iDictSize, -1, -1, -1, -1, -1);  // 1 << 14 = 16K, 1 << 16 = 64K

        if( (res == SZ_OK) && (propsSize == LZMA_PROPS_SIZE) )
        {
            *(unsigned int *)pOutBuf = bufLen;
            rzt = propsSize + destLen + 4;
        }else sRzt.resize(0);
    }
    catch (std::exception &e) {
        printf("lzma_pack_buf err [%s]:: buf len %d, rzt[%d] \n", e.what(), bufLen, rzt);
    }
    if( fDebug ) printf("lzma_pack_buf:: res [%d], propsSize[%d], destLen[%d],  rzt = [%d]\n", res, propsSize, destLen, rzt);
    return rzt;
}

int lzma_pack_block(CBlock* block, string& sRzt, int iLevel, unsigned int iDictSize)  // (1 << 17) = 131072 = 128K
{
    int rzt = 0;
    string sbf;
    int bsz = CBlockToBuffer(block, sbf);
    if( bsz > 12 )
    {
        unsigned char* pBuf = (unsigned char*)sbf.c_str();
        rzt = lzma_pack_buf(pBuf, bsz, sRzt, iLevel, iDictSize);
    }
    sbf.resize(0);
    return rzt;
}

int bitnet_pack_block(CBlock* block, string& sRzt)
{
    if( dw_zip_block == 1 ) return lzma_pack_block(block, sRzt, 9, uint_256KB);
    else if( dw_zip_block == 2 ) return lz4_pack_block(block, sRzt);
    return 0;  // compression disabled or unknown mode; avoid falling off the end
}

bool getCBlockByFilePos(CAutoFile filein, unsigned int nBlockPos, CBlock* block)
{
    bool rzt = false;
    int ips = nBlockPos - 4;  // seek back 4 bytes to the zipped block size
    if (fseek(filein, ips, SEEK_SET) != 0)
        return error("getCBlockByFilePos:: fseek failed");
    filein >> ips; // read the zipped block size
    if( fDebug ) printf("getCBlockByFilePos:: zipped block size [%d] \n", ips);
    string s;   s.resize(ips);   char* pZipBuf = (char *)s.c_str();
    filein.read(pZipBuf, ips);
    string sUnpak;
    int iRealSz = 0;
    if( dw_zip_block == 1 ) iRealSz = lzma_depack_buf((unsigned char*)pZipBuf, ips, sUnpak);
    else if( dw_zip_block == 2 ) iRealSz = lz4_unpack_buf(pZipBuf, ips - 4, sUnpak);
    if( fDebug ) printf("getCBlockByFilePos:: zipped block size [%d], iRealSz [%d] \n", ips, iRealSz);
    if( iRealSz > 0 )
    {
        pZipBuf = (char *)sUnpak.c_str();
        rzt = CBlockFromBuffer(block, pZipBuf, iRealSz) > 12;
    }
    s.resize(0);   sUnpak.resize(0);
    return rzt;
}

bool getCBlocksTxByFilePos(CAutoFile filein, unsigned int nBlockPos, unsigned int txId, CTransaction& tx)
{
    bool rzt = false;
    CBlock block;
    rzt = getCBlockByFilePos(filein, nBlockPos, &block);
    if( rzt )
    {
        if( block.vtx.size() > txId )
        {
            tx = block.vtx[txId];
            if( fDebug ){
                printf("\n\n getCBlocksTxByFilePos:: tx info: \n");
                tx.print();
            }
        }else rzt = false;
    }
    return rzt;
}


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vetpet on July 02, 2016, 04:06:59 PM
I don't quite understand. Could this be used with Bitcoin Core?


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: Barnabe on July 02, 2016, 05:54:30 PM
Compression features usually come with an increase of computational power. Have you done any tests to see how much more CPU power would someone need to run this and how much disk space would be saved ?


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 02, 2016, 08:06:47 PM
I don't quite understand. Could this be used with Bitcoin Core?

Yes.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 02, 2016, 08:23:16 PM
Compression features usually come with an increase of computational power. Have you done any tests to see how much more CPU power would someone need to run this and how much disk space would be saved ?

The compression algorithm has been used in Vpncoin with LZMA (7-Zip).
In a Windows environment, the CPU impact is negligible.
It can save 20%~25% or even more disk space.
As you know, the more content there is, the higher the compression ratio,
and the bigger the block, the higher the compression ratio.
I think the compression ratio would be even higher on Bitcoin,
because Bitcoin's blocks are relatively large.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: amaclin on July 03, 2016, 10:36:47 AM
In a Windows environment, the CPU impact is negligible.
It can save 20%~25% or even more disk space.

My harddisk is 1TB
Right now 328 GB is free space. I think that this would be enough for 3-5 years for me.
What is the reason to compress the blockchain and increase the CPU load?
I see no reasons for compressing data on disk.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: elbandi on July 03, 2016, 11:25:49 AM
Compression features usually come with an increase of computational power. Have you done any tests to see how much more CPU power would someone need to run this and how much disk space would be saved ?
I store some blockchain data in squashfs (with xz compression); it stores 75G of data in a 57G file, so the compression ratio is 24%.
CPU usage is indiscernible.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 03, 2016, 11:57:05 AM
Compression features usually come with an increase of computational power. Have you done any tests to see how much more CPU power would someone need to run this and how much disk space would be saved ?
I store some blockchain data in squashfs (with xz compression); it stores 75G of data in a 57G file, so the compression ratio is 24%.
CPU usage is indiscernible.

This algorithm dynamically compresses and decompresses each individual block.
The bigger the block, the higher the compression ratio.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 03, 2016, 12:00:31 PM
In a Windows environment, the CPU impact is negligible.
It can save 20%~25% or even more disk space.

My harddisk is 1TB
Right now 328 GB is free space. I think that this would be enough for 3-5 years for me.
What is the reason to compress the blockchain and increase the CPU load?
I see no reasons for compressing data on disk.

This algorithm not only saves disk space, it can also save the same proportion of network traffic. Double the benefit :)


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: Barnabe on July 04, 2016, 08:36:13 PM
In a Windows environment, the CPU impact is negligible.
It can save 20%~25% or even more disk space.

My harddisk is 1TB
Right now 328 GB is free space. I think that this would be enough for 3-5 years for me.
What is the reason to compress the blockchain and increase the CPU load?
I see no reasons for compressing data on disk.

This algorithm not only saves disk space, it can also save the same proportion of network traffic. Double the benefit :)
It's true that disk space is not a big problem for nodes, but traffic is! Saving 25%+ on traffic could be really interesting!
I guess it sends compressed blocks to other clients using the same protocol.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 04, 2016, 11:04:37 PM
In a Windows environment, the CPU impact is negligible.
It can save 20%~25% or even more disk space.

My harddisk is 1TB
Right now 328 GB is free space. I think that this would be enough for 3-5 years for me.
What is the reason to compress the blockchain and increase the CPU load?
I see no reasons for compressing data on disk.

This algorithm not only saves disk space, it can also save the same proportion of network traffic. Double the benefit :)
It's true that disk space is not a big problem for nodes, but traffic is! Saving 25%+ on traffic could be really interesting!
I guess it sends compressed blocks to other clients using the same protocol.

Yes, you're right.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 06, 2016, 02:22:05 AM
Original Bitcoin Genesis Block Hex Code:
https://i.imgur.com/YAb1B9G.jpg

Compression Bitcoin Genesis Block Hex Code:
https://i.imgur.com/L3GPvcd.jpg


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 06, 2016, 11:27:40 AM
Yesterday I ported the compression algorithm code to Bitcoin version 0.8.6;
the compression effect is obvious.

To show the compression effect, I changed a function inside main.cpp
so that each block file (blkxxxxx.dat) contains exactly 10,000 blocks.

Code:
bool FindBlockPos(CValidationState &state, CDiskBlockPos &pos, unsigned int nAddSize, unsigned int nHeight, uint64 nTime, bool fKnown = false)
{
...
        /* while (infoLastBlockFile.nSize + nAddSize >= MAX_BLOCKFILE_SIZE) { */
        if( ((nHeight / 10000) > 0) && ((nHeight % 10000) == 0) ) {
            printf("nHeight = [%d], Leaving block file %i: %s\n", nHeight, nLastBlockFile, infoLastBlockFile.ToString().c_str());
            FlushBlockFile(true);
            nLastBlockFile++;
            infoLastBlockFile.SetNull();
            pblocktree->ReadBlockFileInfo(nLastBlockFile, infoLastBlockFile); // check whether data for the new file somehow already exist; can fail just fine
            fUpdatedLast = true;
        }
...
}

Here is the test data (compression ratio = percentage of disk space saved):

blk00000.dat (include 0 ~ 9999 blocks), Original size is 2,318,345 bytes, After compression is 2,116,328 bytes, compression ratio is 8.7%,
blk00001.dat (include 10000 ~ 19999 blocks), Original size is 2,303,141 bytes, After compression is 2,103,239 bytes, compression ratio is 8.6%,
blk00002.dat (include 20000 ~ 29999 blocks), Original size is 2,440,262 bytes, After compression is 2,224,608 bytes, compression ratio is 8.8%,
blk00003.dat (include 30000 ~ 39999 blocks), Original size is 2,500,372 bytes, After compression is 2,278,627 bytes, compression ratio is 8.86%,
blk00004.dat (include 40000 ~ 49999 blocks), Original size is 2,775,946 bytes, After compression is 2,527,266 bytes, compression ratio is 8.95%,
blk00005.dat (include 50000 ~ 59999 blocks), Original size is 4,611,316 bytes, After compression is 3,927,464 bytes, compression ratio is 14.8%,
blk00006.dat (include 60000 ~ 69999 blocks), Original size is 6,788,315 bytes, After compression is 5,763,507 bytes, compression ratio is 15%,
blk00007.dat (include 70000 ~ 79999 blocks), Original size is 8,111,206 bytes, After compression is 6,493,703 bytes, compression ratio is 19.9%,
blk00008.dat (include 80000 ~ 89999 blocks), Original size is 7,963,189 bytes, After compression is 7,048,131 bytes, compression ratio is 11.49%,
blk00009.dat (include 90000 ~ 99999 blocks), Original size is 20,742,813 bytes, After compression is 13,708,206 bytes, compression ratio is 33.9%,
blk00010.dat (include 100000 ~ 109999 blocks), Original size is 23,122,509 bytes, After compression is 19,481,570 bytes, compression ratio is 15.7%,
blk00011.dat (include 110000 ~ 119999 blocks), Original size is 50,681,392 bytes, After compression is 40,918,962 bytes, compression ratio is 19.2%,
blk00012.dat (include 120000 ~ 129999 blocks), Original size is 107,469,564 bytes, After compression is 88,319,322 bytes, compression ratio is 17.8%,
blk00013.dat (include 130000 ~ 139999 blocks), Original size is 231,631,119 bytes, After compression is 188,562,481 bytes, compression ratio is 18.59%,
blk00014.dat (include 140000 ~ 149999 blocks), Original size is 215,720,950 bytes, After compression is 174,676,348 bytes, compression ratio is 19%,
blk00015.dat (include 150000 ~ 159999 blocks), Original size is 173,452,632 bytes, After compression is 139,074,101 bytes, compression ratio is 19.8%,
blk00016.dat (include 160000 ~ 169999 blocks), Original size is 212,377,235 bytes, After compression is 164,287,461 bytes, compression ratio is 22.6%,
blk00017.dat (include 170000 ~ 179999 blocks), Original size is 263,652,393 bytes, After compression is 205,578,322 bytes, compression ratio is 22%,
blk00018.dat (include 180000 ~ 189999 blocks), Original size is 887,112,287 bytes, After compression is 612,296,114 bytes, compression ratio is 30.9%,
blk00019.dat (include 190000 ~ 199999 blocks), Original size is 925,036,513 bytes, After compression is 638,670,092 bytes, compression ratio is 30.9%,


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: pedrog on July 06, 2016, 01:51:34 PM
But is it better than Pied Piper's compression algorithm?


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 06, 2016, 02:27:46 PM
But is it better than Pied Piper's compression algorithm?

I haven't compared it with other people's algorithms;
maybe you can make a comparison to see whose compression ratio is higher.

Moreover, our algorithm can save the same proportion of network traffic by transmitting compressed blocks over the protocol.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: achow101 on July 06, 2016, 02:48:08 PM
If you want to see this in Bitcoin Core, I suggest that you open a pull request with your changes at https://github.com/bitcoin/bitcoin/pulls. Then see how the discussion goes with the actual developers of Core. However, since 0.13 has reached its feature freeze, your change would not make it to a release until 0.14 at the earliest, which will be in roughly 6 months.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 06, 2016, 03:16:52 PM
If you want to see this in Bitcoin Core, I suggest that you open a pull request with your changes at https://github.com/bitcoin/bitcoin/pulls. Then see how the discussion goes with the actual developers of Core. However, since 0.13 has reached its feature freeze, your change would not make it to a release until 0.14 at the earliest, which will be in roughly 6 months.

Yes, we hope that the Bitcoin development team will use this compression algorithm.
Thank you very much for your advice.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: TransaDox on July 07, 2016, 12:08:28 PM
Hashes and encrypted data are extremely resistant to ASCII-based compression algorithms, so it is likely you are at best compressing the scripts, and small transactions may actually get bigger. Have you reached out to the UPX developers for a look at binary compression?


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 07, 2016, 12:27:57 PM
Hashes and encrypted data are extremely resistant to ASCII-based compression algorithms, so it is likely you are at best compressing the scripts, and small transactions may actually get bigger. Have you reached out to the UPX developers for a look at binary compression?

Our compression algorithm is binary compression.
The core algorithm is LZMA, which compresses better than LZ4.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: icey on July 07, 2016, 08:11:14 PM
But is it better than Pied Piper's compression algorithm?

 :D Not sure he got it


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: cloverme on July 08, 2016, 07:30:49 PM
But is it better than Pied Piper's compression algorithm?
Best thing I read all day lol

I bet Hooli did an under the table deal with Lempel–Ziv on this lossless algo we're seeing here  ;D


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 08, 2016, 11:03:48 PM
But is it better than Pied Piper's compression algorithm?
Best thing I read all day lol

I bet Hooli did an under the table deal with Lempel–Ziv on this lossless algo we're seeing here  ;D

Our compression algorithm is also lossless.
We tested it on Bitcoin, compressing and decompressing about 200,000 blocks without any problems.
Our blockchain compression ratio may be the highest,
and it's not just compression.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: cr1776 on July 09, 2016, 02:30:43 AM
But is it better than Pied Piper's compression algorithm?
Best thing I read all day lol

I bet Hooli did an under the table deal with Lempel–Ziv on this lossless algo we're seeing here  ;D

Our compression algorithm is also lossless.
We tested it on Bitcoin, compressing and decompressing about 200,000 blocks without any problems.
Our blockchain compression ratio may be the highest,
and it's not just compression.

Like knightdk said, the best way to get it into Bitcoin Core is to do a pull request.  A few other suggestions:
1. If you create unit-tests and/or other tests that will also increase the speed at which it is evaluated and potentially merged.

2. If you were to create an option to enable it conditionally for both disk compression and network compression, so that it could be running on some nodes that were testing it vs an all-or-nothing approach to test it, I also think that would increase the likelihood of adoption.

3. Similarly, for the network bandwidth there might need to be a way for nodes running this compression code to identify each other so that it could serve compressed data when possible and uncompressed data (e.g. for non-upgraded nodes) at other times (this may already be implemented, I didn't check the code).  Or alternatively be able to send either compressed or uncompressed dynamically through some type of negotiation.

4. You might also consider that some people (e.g. miners) might choose to serve uncompressed blocks when they have found a new block so as to minimize latency.  e.g.  I would serve compressed blocks for old blocks, but for one I just found, I might just send it uncompressed to avoid the time taken to compress it.  It could be a trade off - is it faster to send an uncompressed block or to compress and then send.  Much would depend on the network speed of connected nodes and I am not sure if anyone has tested which is faster to do.

In short, the more groundwork that is done, the better.

In general, on disk and sending old blocks that are compressed is a nice feature.  

My thoughts.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: Velkro on July 09, 2016, 09:54:42 AM
Very interesting stuff. There are cases where HDD space for a node is not a problem, as someone mentioned above, but there are cases where it is a problem. Same with traffic: some have a problem with it, some don't.
It would be great to be able to choose in the configuration whether your node uses compression (if HDD space is a problem) or not (if you have a lot of free HDD space). Similar to GZIP in web/browser traffic.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: AlexGR on July 09, 2016, 06:31:32 PM
In a Windows environment, the CPU impact is negligible.
It can save 20%~25% or even more disk space.

My harddisk is 1TB
Right now 328 GB is free space. I think that this would be enough for 3-5 years for me.
What is the reason to compress the blockchain and increase the CPU load?
I see no reasons for compressing data on disk.

Fast compression algorithms can trade very slow I/O time, for much faster cpu time - thus speeding things up in general.

For example, if 100 mb can go down to 80, reducing read, write, seek operations, the overall time of a process involving disk I/O on these 80mb can be sped up significantly.

Now if you have a much better compression algorithm that takes 100mb to 60mb but takes a couple seconds instead of msecs to do the job in order to gain the extra -20mb of compression - that doesn't work to speed things up in local operations. It could be still useful though for network transmission because those 20mbs could take more than a few seconds to get transmitted.

Another reason is that if you are marginal with the blockchain in a particular storage medium, especially SSDs, you can then fit the blockchain after compressing it. That will allow you to exploit the faster SSD instead of going with the slower mechanical.

The blockchain won't fit in a 64gb SSD right now, but it can fit compressed (say a BTRFS partition with compression) - although it doesn't have much headroom for SSD operations that require some free space. In some time, it won't fit in a properly configured 120gb ssd running an OS (assuming it has like 40gb for linux, swap, and free space that SSDs require) but it will fit if the data is compressed. The gains there are exclusively from using the faster SSD over the mechanical disk.

In any case, from searching older discussions on the dev mailing list, I think some devs are reserved about using popular compression algorithms due to security concerns and exploits.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 14, 2016, 07:00:09 AM
But is it better than Pied Piper's compression algorithm?
Best thing I read all day lol

I bet Hooli did an under the table deal with Lempel–Ziv on this lossless algo we're seeing here  ;D

Our compression algorithm is also lossless.
We tested it on Bitcoin, compressing and decompressing about 200,000 blocks without any problems.
Our blockchain compression ratio may be the highest,
and it's not just compression.

Like knightdk said, the best way to get it into Bitcoin Core is to do a pull request.  A few other suggestions:
1. If you create unit-tests and/or other tests that will also increase the speed at which it is evaluated and potentially merged.

2. If you were to create an option to enable it conditionally for both disk compression and network compression, so that it could be running on some nodes that were testing it vs an all-or-nothing approach to test it, I also think that would increase the likelihood of adoption.

3. Similarly, for the network bandwidth there might need to be a way for nodes running this compression code to identify each other so that it could serve compressed data when possible and uncompressed data (e.g. for non-upgraded nodes) at other times (this may already be implemented, I didn't check the code).  Or alternatively be able to send either compressed or uncompressed dynamically through some type of negotiation.

4. You might also consider that some people (e.g. miners) might choose to serve uncompressed blocks when they have found a new block so as to minimize latency.  e.g.  I would serve compressed blocks for old blocks, but for one I just found, I might just send it uncompressed to avoid the time taken to compress it.  It could be a trade off - is it faster to send an uncompressed block or to compress and then send.  Much would depend on the network speed of connected nodes and I am not sure if anyone has tested which is faster to do.

In short, the more groundwork that is done, the better.

In general, on disk and sending old blocks that are compressed is a nice feature.  

My thoughts.


Thanks


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 14, 2016, 07:06:29 AM
I will soon upload the complete source code to GitHub.
It is integrated into Bitcoin version 0.8.6.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 22, 2016, 03:52:57 PM
The blockchain compression algorithm source code, integrated into Bitcoin version 0.8.6, has been uploaded:
https://github.com/Bit-Net/Bitcoin-0.8.6 (https://github.com/Bit-Net/Bitcoin-0.8.6)

https://i.imgur.com/PMuVcad.jpg


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: MadCow on July 23, 2016, 07:18:13 AM
I will soon upload the complete source code to GitHub.
It is integrated into Bitcoin version 0.8.6.

is the new wallet ready yet?


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 24, 2016, 08:52:54 AM
is the new wallet ready yet?

The blockchain compression algorithm source code, integrated into Bitcoin version 0.8.6, has been uploaded:
https://github.com/Bit-Net/Bitcoin-0.8.6 (https://github.com/Bit-Net/Bitcoin-0.8.6)

This source code is just for testing; it compiles and runs on Ubuntu and Windows,
and it is compatible with Bitcoin; it does not fork Bitcoin.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: achow101 on July 24, 2016, 01:16:23 PM
is the new wallet ready yet?

The blockchain compression algorithm source code, integrated into Bitcoin version 0.8.6, has been uploaded:
https://github.com/Bit-Net/Bitcoin-0.8.6 (https://github.com/Bit-Net/Bitcoin-0.8.6)

This source code is just for testing; it compiles and runs on Ubuntu and Windows,
and it is compatible with Bitcoin; it does not fork Bitcoin.
Why don't you try integrating this into the master branch of Bitcoin? 0.8.6 is incredibly old and outdated software and should not be used anymore.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: gmaxwell on July 24, 2016, 07:19:22 PM
0.8.6 is what most altcoins are based on; it is an old codebase with many vulnerabilities. Also look at the screenshots. This is in the wrong subforum.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: JohnnyBTCSeed on July 24, 2016, 07:47:09 PM
0.8.6 is what most altcoins are based on; it is an old codebase with many vulnerabilities. Also look at the screenshots. This is in the wrong subforum.
https://s31.postimg.org/4gjl9l3ij/Gregory_Maxwell.jpg

You got served


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 25, 2016, 12:21:59 AM
is the new wallet ready yet?

The blockchain compression algorithm source code, integrated into Bitcoin version 0.8.6, has been uploaded:
https://github.com/Bit-Net/Bitcoin-0.8.6 (https://github.com/Bit-Net/Bitcoin-0.8.6)

This source code is just for testing; it compiles and runs on Ubuntu and Windows,
and it is compatible with Bitcoin; it does not fork Bitcoin.
Why don't you try integrating this into the master branch of Bitcoin? 0.8.6 is incredibly old and outdated software and should not be used anymore.

As you know, new code needs more time to study.
I am familiar with version 0.8.6, so it was easy to port the compression algorithm code to it.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: vpncoin on July 25, 2016, 12:33:50 AM
0.8.6 is what most altcoins are based on; it is an old codebase with many vulnerabilities. Also look at the screenshots. This is in the wrong subforum.

You guys hurt my heart.  >:(
I offer my ideas for free and provide source code that compiles;
it is meant to test and prove that this compression algorithm is feasible.
I'm doing this for free; I just hope to help Bitcoin and make it better.


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: MadCow on July 25, 2016, 02:51:01 AM
0.8.6 is what most altcoins are based on; it is an old codebase with many vulnerabilities. Also look at the screenshots. This is in the wrong subforum.

You guys hurt my heart.  >:(
I offer my ideas for free and provide source code that compiles;
it is meant to test and prove that this compression algorithm is feasible.
I'm doing this for free; I just hope to help Bitcoin and make it better.

Don't be put off, everyone in crypto has their own bias & agenda depending on what coins they own, what projects they're following, and who pays their wages. Keep doing your own stuff!


Title: Re: Bitcoin based Blockchain compression algorithm
Post by: achow101 on July 25, 2016, 02:57:06 AM
As you know, new code needs more time to study.
I am familiar with version 0.8.6, so it was easy to port the compression algorithm code to it.
Well, if you want this to be adopted by Bitcoin Core, you will have to put it into the master branch. As I said earlier, 0.8.6 is old and outdated. It is 5 major releases behind, during which time the code has been refactored, changed, and shuffled around. Furthermore, 0.8.6 has some vulnerabilities that were fixed in later releases.

If you want to be taken seriously, make it for the latest master. No one cares what you can do on 0.8.6; clearly you can program and read code, so it shouldn't be hard to figure out where to put it in the master branch. Bitcoin Core is very well documented and commented.

0.8.6 is what most altcoins are based on; it is an old codebase with many vulnerabilities. Also look at the screenshots. This is in the wrong subforum.

You guys hurt my heart.  >:(
I offer my ideas for free and provide source code that compiles;
it is meant to test and prove that this compression algorithm is feasible.
I'm doing this for free; I just hope to help Bitcoin and make it better.
Almost everyone who contributes to Bitcoin Core does it for free. Sure, the compression algorithm works on 0.8.6, but will it still work on master? Why don't you try it out?

I do agree though that gmaxwell shouldn't have moved this as it is about Bitcoin Core, albeit an old version.