Bitcoin Forum
Author Topic: Chinese community consensus to stay with Core 1MB. Meeting held Jan 24 Shenzhen,  (Read 2734 times)
CIYAM
Legendary
*
Offline Offline

Activity: 1890
Merit: 1078


Ian Knowles - CIYAM Lead Developer


View Profile WWW
January 24, 2016, 03:49:24 PM
 #41

BTW - notice that @franky1 *still refuses to show us his code*.

Does anyone on this forum actually believe the guy *can code*?

(other than function declarations in VB)

With CIYAM anyone can create 100% generated C++ web applications in literally minutes.

GPG Public Key | 1ciyam3htJit1feGa26p2wQ4aw6KFTejU
bargainbin
Full Member
***
Offline Offline

Activity: 126
Merit: 100



View Profile
January 24, 2016, 03:50:51 PM
 #42

... increasing block sizes will actually end up centralising the mining more than anything else ...

How is this still a thing, when 9 guys control >90% of the hashpower?

If you are talking about pools then you should know that the hashers can change pools at any time.

They can, but they don't. What's your point? You also missed a chunk:
Quote
Also how do people manage to equate "China no longer having a huge advantage because of cheap/subsidized power" and "mining centralization"?
In light of most hashpower being in China?
franky1
Legendary
*
Offline Offline

Activity: 4256
Merit: 4522



View Profile
January 24, 2016, 03:52:48 PM
 #43


There is a very good reason that the people you listed (apart from the last one) don't do that.

That is because they know it is really of no benefit to the future of Bitcoin just to increase the block size.


ok an answer.. finally..
but is there anything bad with having several implementations that do the same job released by different people??

EG spring 2016
luke_jr made an exact replica line for line of core-0.12 but with the blocksize being set to 2mb.. no other dirty code.
YOU made an exact replica line for line of core-0.12 but with the blocksize being set to 2mb.. no other dirty code.
adam back made an exact replica line for line of core-0.12 but with the blocksize being set to 2mb.. no other dirty code.
jeff garzik made an exact replica line for line of core-0.12 but with the blocksize being set to 2mb.. no other dirty code.
Gmaxwell made an exact replica line for line of core-0.12 but with the blocksize being set to 2mb.. no other dirty code.

and then
summer 2016
adam back releases 2mb segwit, (fully compatible and communicates to lukejr, YOU, jeff, greg and adam's previous version)

would that be a hostile takeover, would it be preventing anything in adam back's roadmap ??

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
CIYAM
Legendary
*
Offline Offline

Activity: 1890
Merit: 1078


Ian Knowles - CIYAM Lead Developer


View Profile WWW
January 24, 2016, 03:53:12 PM
 #44

They can, but they don't. What's your point?

Then blame the hashers rather than the pools.

As there are far more hashers than there are pools you'd think that the hashers would perhaps care more about decentralising - but if they don't then there isn't much you can do to fix it.

With CIYAM anyone can create 100% generated C++ web applications in literally minutes.

GPG Public Key | 1ciyam3htJit1feGa26p2wQ4aw6KFTejU
watashi-kokoto (OP)
Sr. Member
****
Offline Offline

Activity: 682
Merit: 269



View Profile
January 24, 2016, 03:54:55 PM
 #45

I wonder if this C code qualifies me for the honorary Satoshi Nakamoto title  Grin


Code:

// Copyright (c) 2016 Satoshi Nakamoto
// Distributed under the MIT/X11 software license, see the accompanying
// file license.txt or http://www.opensource.org/licenses/mit-license.php.


#include <stdio.h>
#include <gmp.h>
#include <math.h>
/* make sure to invoke gcc with -lgmp -lm */


// NEVER i repeat NEVER use a dead big number variable

struct res {
int rh; // height
// each return from recursion increases h by one
// deepest solution returns zero
// no solution returns SOLUTIONNOTFOUND

// logarithm of the number of variables in the level having the most variables
double e; // maximum entropy per round

unsigned long long p0;
unsigned long long p1;
unsigned long long p2;
};

void nxt(mpz_t r, mpz_t a) {
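// computes r = a*(a-1)/2 using GMP, i.e. the number of unordered pairs of a objects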
mpz_t s,t;
mpz_init(s);
mpz_init(t);
mpz_sub_ui(s, a, 1); // bcsub
mpz_mul(t, a, s); // bcmul
mpz_div_2exp(r, t, 1); // bcdiv 2
mpz_clear(s);
mpz_clear(t);
}

int
compare_doubles (const double da, const double db)
{
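  // three-way compare: returns -1, 0, or +1 according to whether da < db, da == db, or da > db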

  return (da > db) - (da < db);
}

static int SUCCESS = 0;
static int SOLUTIONNOTFOUND = 9999999;

//static struct res oracle;


static unsigned long long longlongbitsminus1;

// h .. number of already pushed hashes to signature
// s .. NOT the number of objects in this layer  !!! see d
// e .. the already signed entropy (bits)
// d .. how many divisions by two need to be performed on s to get actual number of objects in this layer
// d .. this is an optimization to save memory
// b .. this is burst. during a burst (>0) cannot do growth rounds, only merkle reduce rounds
// .... this is an optimization because merkle reduce rounds tend to appear in bursts

// m .. the total maximum length of message that needs to be signed
static long double messagedigest;
static int burstdefault;

static double maxentropyperlevel = 0.0;

// ooo and sticky are related to burst
// they activate bursts

static int beancounter = 0;

//int debug() {
// return oracle.rh != SOLUTIONNOTFOUND;
//}

//void nextoracle() {
//
/// oracle.p0 /= 2;
// oracle.p0 |= oracle.p1 << longlongbitsminus1;
// oracle.p1 /= 2;
// oracle.p1 |= oracle.p2 << longlongbitsminus1;
// oracle.p2 /= 2;
//
//}

struct res findt(int h, mpz_t s, long double e, unsigned d, int ooo, int stick) {
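// findt: recursive search over sequences of rounds.  A merkle-reduce round (marked with a 1 bit)
// signs no entropy and halves the layer; a growth round (marked with a 0 bit) replaces the current
// layer of t objects with t*(t-1)/2 objects (see nxt) and signs log2 of that as entropy.  Returns
// the shortest sequence that signs 'messagedigest' bits, breaking ties by lower max per-level entropy.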
beancounter++;

// int oracbit = oracle.p0 & 1;
// nextoracle();

struct res x;
double n = 0.;


if (h < 0) {
// maximum hashes heuristic cap exceeded

// Terminate without a solution

x.rh = SOLUTIONNOTFOUND;
x.e = 0;
x.p0 = 0;
x.p1 = 0;
x.p2 = 0;
return x;
}

int mustreducetopk = 0;
int canreducebemoreorfour = 0;

if ((e >= messagedigest) ) { // must
mustreducetopk = 1;
}

// here I must compare number of variables with the cached division
// to see if we have less than 8 variables in this level
//(after level with 8 variables comes a level with 4 variables-the public key level)
// in this case the algorithm will terminate and return a valid solution(iff all entropy is signed)
unsigned long int cacheddiv = 8;
cacheddiv <<= d;

int cmp = mpz_cmp_ui(s, cacheddiv);

if (cacheddiv == 0) {

// OVERFLOW OF DIVIDER
// Terminate without a solution here?

int terminateaftertoomanymerklereductions = 0;

if (terminateaftertoomanymerklereductions){

x.rh = SOLUTIONNOTFOUND;
x.e = 0;
x.p0 = 0;
x.p1 = 0;
x.p2 = 0;
return x;
}


// approximate double based compute

double huge = mpz_get_d (s);

double ex = 8.0 * exp2 ((double)d);



cmp = compare_doubles(huge, ex);


}




if ((cmp == 0) || (cmp > 0)) {
canreducebemoreorfour = 1;
}



if (mustreducetopk && !canreducebemoreorfour) {

// solution found. 0
x.rh = SUCCESS;
x.e = 0;
x.p0 = 0;
x.p1 = 0;
x.p2 = 0;
return x;
}

if (mustreducetopk || canreducebemoreorfour) {

// no entropy signed when reducing
n = 0;

// // print
/// mpz_out_str(stdout, 10, b);  putchar('\n');
/// printf("%lf \n", n);


// counter of the burst
int ppp = ooo-1;
if (ppp <= 0) {
ppp = 0;
}


// if ((oracle.rh == SOLUTIONNOTFOUND) || (oracbit == 1)) {

// recurse
x = findt(h-1, s, e+n, d+1, ppp, 1);

// } else {
// x.rh = SOLUTIONNOTFOUND;
// x.e = 0;
// x.p0 = 0;
// x.p1 = 0;
// x.p2 = 0;
// }

// add this level to total length
x.rh++;

// mark 1 bit
x.p2 *= 2;
x.p2 |= x.p1 >> longlongbitsminus1;
x.p1 *= 2;
x.p1 |= x.p0 >> longlongbitsminus1;
x.p0 *= 2;
x.p0++;


if (mustreducetopk) {
return x;
}


} else {
x.rh = SOLUTIONNOTFOUND;
x.e = 0;
x.p0 = 0;
x.p1 = 0;
x.p2 = 0;
}

// if not burst
if (ooo == 0) {
mpz_t b, t;


mpz_init(b);

struct res y;

//
// printf("~~~~%i ~~~~ \n",beancounter);

// explode
if (d > 0) {
// here i apply the cached merkle reduce rounds



// if (debug()) {
// // print
// mpz_out_str(stdout, 10, s);  putchar('\n');
// }



mpz_init(t);


if (1) {
mpz_t xx;
mpz_init(xx);
mpz_t gg;
mpz_init(gg);
mpz_t oo;
mpz_init(oo);
mpz_t pp;
mpz_init(pp);

mpz_set_ui(xx, 1);
mpz_mul_2exp(gg, xx, d);
mpz_sub_ui(oo,gg,1);

// if (debug()) {
// // print
// mpz_out_str(stdout, 10, oo);  putchar('\n');
// }


mpz_add(pp,oo,s);



mpz_div_2exp(t, pp, d);


// mpz_div_2exp(t, s, d);

// don't forget to free the big number
mpz_clear(pp);
// don't forget to free the big number
mpz_clear(oo);
// don't forget to free the big number
mpz_clear(gg);
// don't forget to free the big number
mpz_clear(xx);

} else {
mpz_div_2exp(t, s, d);
}


// if (debug()) {
// // print
// mpz_out_str(stdout, 10, t);  putchar('\n');
// }

// here I calculate the entropy from t
n = (mpz_get_d (t));

} else {

// print
// mpz_out_str(stdout, 10, s);  putchar('\n');

// here I calculate the entropy from t
n = (mpz_get_d (s));

}

// if (debug()) {
// // Print entropy
// printf("|%i|%lf \n", beancounter,n);
// }


n = log2(n);


// explode
if (d > 0) {
nxt(b, t);
mpz_clear(t);
} else {
nxt(b, s);
}

// print
// mpz_out_str(stdout, 10, b);  putchar('\n');

// here I calculate the entropy from b
n = (mpz_get_d (b));
// if (debug()) {
// // Print entropy
// printf("|%i|%lf \n", beancounter,n);
// }
n = log2(n);


// while (1) {
// sleep(1);
// }

// check if it exceeds the per-round entropy threshold cap
if ((n > maxentropyperlevel) && (maxentropyperlevel != 0.0)) {




// don't forget to free the big number
mpz_clear(b);

// entropy cap exceeded, so abandon this growth branch and keep x
return x;
}


// // print

// printf("%lf \n", n);


// burst counter
int ppp = ooo;
if (stick == 1) {
ppp = burstdefault;
}

// If x branch was successful, this branch should not be longer.
if (x.rh != SOLUTIONNOTFOUND) {
h = x.rh+1;
}

// // print
// printf("|%i|%lf \n", beancounter,n);

// if ((oracle.rh == SOLUTIONNOTFOUND) || (oracbit == 0)) {
// // recurse
y = findt(h-1, b, e+n, 0, ppp, stick);
// } else {
// y.rh = SOLUTIONNOTFOUND;
// y.e = 0;
// y.p0 = 0;
// y.p1 = 0;
// y.p2 = 0;
// }

// free big number
mpz_clear(b);

// just check if solution
if (y.rh == SOLUTIONNOTFOUND) {
return x;
}

// add this level to total length
y.rh++;

if (y.e < n) {

y.e = n;

// // print
// printf("|%i|%lf \n", beancounter,n);


// while (1) {
// sleep(1);
// }

}


// mark 0 bit
y.p2 *= 2;
y.p2 |= y.p1 >> longlongbitsminus1;
y.p1 *= 2;
y.p1 |= y.p0 >> longlongbitsminus1;
y.p0 *= 2;



// get choice leading to shorter. Long is bad
if (x.rh > y.rh) {
x = y;

// get choice leading to less max entropy per level. Big is bad
} else if ((x.rh == y.rh) && (x.e > y.e)) {
x = y;
}

return x;

}

return x;
}

int roundsform_heuristics(double m) {
double room = 10.; // 10 rounds more than expected allowed

// m160bit ..  84rounds ~~ +10
// m256bit .. 133rounds ~~ +10
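// (the constants below are the line through (160,84) and (256,133): slope 49/96 = 0.510416..., intercept = 2.333...)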

return (int)((m * 0.510416667) + 2.333333333 + room);
}

int main(int argc, char *argv[])
{
// get machine word
longlongbitsminus1 = ((8*sizeof(unsigned long long))-1);
printf("LLsizem1:%llu\n", longlongbitsminus1);


  int h;  h = 0;



//  mpz_out_str(stdout, 10, b);  putchar('\n');




struct res r;





long double m = 128.0;
long double md = 8.0;


int burstd = 0;
int burst = 15;

double elimit = 0.;
double elimitd = 0.;

// load the command line parameters
if (argc == 7) {
sscanf(argv[1], "%Lf", &m);
sscanf(argv[2], "%Lf", &md);

sscanf(argv[3], "%i", &burst);
sscanf(argv[4], "%i", &burstd);

sscanf(argv[5], "%lf", &elimit);
sscanf(argv[6], "%lf", &elimitd);
}


// predict rounds using heuristic
h = roundsform_heuristics(m);
h = 0xffff;



printf("\nINPUT: Digest M=%Lf; Mdelta=%Lf ; HeuriMaxHashes X=%i ;"
"  BURST=%i Bdelta=%i ; logLimit %lf logLdelta %lf \n\n",m,md,h,burst, burstd, elimit, elimitd);



// solve slightly different many times over the night
int i;
for (i = 0; i < 1000;i++) {


  mpz_t b;  mpz_init(b);
  mpz_set_str(b, "4", 10); // the 10 represents the radix

// real run
// oracle.rh = SOLUTIONNOTFOUND;
maxentropyperlevel = elimit;
burstdefault = burst;
messagedigest = m;
r = findt(h,b,0,0,0,0);
// oracle = r;

// // also verify
// findt(h,b,0,0,0,0);


mpz_clear(b);


if (r.rh == SOLUTIONNOTFOUND) {
printf("SOLUTION WAS NOT FOUND PROBABLY BECAUSE HEURISTIC LIMIT IS TOO LOW: %i\n ", h);
printf("try giving more room to the heuristics\n");

return 0;
}

// print the solution
printf("BITS of Message Digest: %Lf SHORTEST HASHES: %i, burst=%i , e=%lf , PATH",m, r.rh, burstdefault, r.e);

// print the algorithm bitmap (solution)
if (r.p2 == 0) {
if (r.p1 == 0) {
printf("  %llx | %llx %llx %llx\n\n\n", r.p0, r.p2, r.p1, r.p0);
} else {
printf("  %llx%llx | %llx %llx %llx\n\n\n", r.p1, r.p0, r.p2, r.p1, r.p0);
}
} else {
printf("  %llx%llx%llx | %llx %llx %llx\n\n\n", r.p2, r.p1, r.p0, r.p2, r.p1, r.p0);
}

if ((md == 0.) && (burstd == 0) && (elimitd == 0.)) {

return 0;
}


// solve slightly different again

m += md;
if (h != 0xffff) {
h = roundsform_heuristics(m);
}
burst += burstd;
elimit += elimitd;
}

  return 0;

}

CIYAM
Legendary
*
Offline Offline

Activity: 1890
Merit: 1078


Ian Knowles - CIYAM Lead Developer


View Profile WWW
January 24, 2016, 03:56:08 PM
 #46

adam back releases 2mb segwit, (fully compatible and communicates to lukejr, YOU, jeff, greg)

would that be a hostile takeover, would it be preventing anything in adam back's roadmap ??

You seem to imply that we *need* the 2MB blocks ASAP - yet the evidence for that is non-existent (of course it is being pushed by supporters of those trying to wrest control of the project from Bitcoin Core).

If Bitcoin Core ends up supporting 2MB blocks it will only be to stop Gavin and others from taking over the project.

At the end of the day they might be forced into doing this but I really don't think that this is a sensible way forward.


With CIYAM anyone can create 100% generated C++ web applications in literally minutes.

GPG Public Key | 1ciyam3htJit1feGa26p2wQ4aw6KFTejU
bargainbin
Full Member
***
Offline Offline

Activity: 126
Merit: 100



View Profile
January 24, 2016, 03:59:31 PM
 #47

They can, but they don't. What's your point?

Then blame the hashers rather than the pools.

As there are far more hashers than there are pools you'd think that the hashers would perhaps care more about decentralising - but if they don't then there isn't much you can do to fix it.

I'm not "blaming" anyone, I'm merely describing the way things are, i.e. centralized.
Claiming that a higher blocksize limit would *increase* centralization is ludicrous on many levels: both illogical (negatively impacts China, where most hashpower lives), and irrelevant (who cares if it's 9 or 6 bros, they're all buddies anyhow).
franky1
Legendary
*
Offline Offline

Activity: 4256
Merit: 4522



View Profile
January 24, 2016, 04:01:10 PM
 #48

adam back releases 2mb segwit, (fully compatible and communicates to lukejr, YOU, jeff, greg)

would that be a hostile takeover, would it be preventing anything in adam back's roadmap ??

You seem to imply that we *need* the 2MB blocks ASAP - yet the evidence for that is non-existent (of course it is being pushed by supporters of those trying to wrest control of the project from Bitcoin Core).

If Bitcoin Core ends up supporting 2MB blocks it will only be to stop Gavin and others from taking over the project.

At the end of the day they might be forced into doing this but I really don't think that this is a sensible way forward.



blah blah blah.. more rhetoric about blaming banker motives..
i already said a few posts ago, classic and xt won't be downloaded, and ignore the whole gavin/banker motives

is there a hostile takeover if all the developers of blockstream and many people not related to bankers all wanted 2mb.. and they all released independent versions that were all clean code and able to talk to each other.. thus making the motives redundant..

please limit your reply to only talking about the general community of 3 million people who do not have banking motives.. who just want some buffer space in blocks, instead of this crappy high-fee priority shit due to lack of buffer space

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
watashi-kokoto (OP)
Sr. Member
****
Offline Offline

Activity: 682
Merit: 269



View Profile
January 24, 2016, 04:01:24 PM
 #49


I'm not "blaming" anyone, I'm merely describing the way things are, i.e. centralized.
Claiming that a higher blocksize limit would *increase* centralization is ludicrous on many levels: both illogical (negatively impacts China, where most hashpower lives), and irrelevant (who cares if it's 9 or 6 bros, they're all buddies anyhow).

Satoshi Nakamoto is here in my living room and he disagrees with you Grin
CIYAM
Legendary
*
Offline Offline

Activity: 1890
Merit: 1078


Ian Knowles - CIYAM Lead Developer


View Profile WWW
January 24, 2016, 04:02:53 PM
 #50

Claiming that a higher blocksize limit would *increase* centralization is ludicrous on many levels: both illogical (negatively impacts China, where most hashpower lives), and irrelevant (who cares if it's 9 or 6 bros, they're all buddies anyhow).

It is not ludicrous at all - in order to verify a block (as a full node) you need to verify every single signature.

Those operations are not so cheap (my current laptop is actually unable to even keep up with the blockchain because it is simply not fast enough).

So the more signature verification operations that are required (which is what you get with bigger blocks) the more potential nodes you are going to lose (as they will simply be unable to keep up).

Is this so hard to understand?
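
A rough back-of-the-envelope sketch of how that verification load scales with block size. The three constants (average transaction size, signed inputs per transaction, and the verify rate of a slow laptop) are purely illustrative assumptions, not measurements:

Code:

/* Back-of-the-envelope for the argument above: signature checks scale
 * linearly with block size.  All three constants are illustrative
 * assumptions, not measurements. */
#include <stdio.h>

int main(void) {
    const double avg_tx_bytes   = 500.0;   /* assumed average transaction size in bytes */
    const double inputs_per_tx  = 2.0;     /* assumed signed inputs (signatures) per transaction */
    const double verifies_per_s = 2000.0;  /* assumed ECDSA verify rate of a slow laptop */
    double block_mb;

    for (block_mb = 1.0; block_mb <= 8.0; block_mb *= 2.0) {
        double txs  = block_mb * 1e6 / avg_tx_bytes;
        double sigs = txs * inputs_per_tx;
        printf("%.0f MB block: ~%.0f txs, ~%.0f signatures, ~%.1f s to verify\n",
               block_mb, txs, sigs, sigs / verifies_per_s);
    }
    return 0;
}

The absolute numbers are guesses, but the scaling is the point: doubling the block size roughly doubles the signature checks a full node has to keep up with.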

With CIYAM anyone can create 100% generated C++ web applications in literally minutes.

GPG Public Key | 1ciyam3htJit1feGa26p2wQ4aw6KFTejU
CIYAM
Legendary
*
Offline Offline

Activity: 1890
Merit: 1078


Ian Knowles - CIYAM Lead Developer


View Profile WWW
January 24, 2016, 04:04:31 PM
 #51

is there a hostile takeover if all the developers of blockstream and many people not related to bankers all wanted 2mb..

Of course not - but this is simply not what is happening - so a rather pointless hypothetical.

With CIYAM anyone can create 100% generated C++ web applications in literally minutes.

GPG Public Key | 1ciyam3htJit1feGa26p2wQ4aw6KFTejU
franky1
Legendary
*
Offline Offline

Activity: 4256
Merit: 4522



View Profile
January 24, 2016, 04:06:25 PM
 #52

is there a hostile takeover if all the developers of blockstream and many people not related to bankers all wanted 2mb..

Of course not - but this is simply not what is happening - so a rather pointless hypothetical.


so by that logic.. your only negative of 2mb is the motives of bankers.. and if we dissolved that threat.. you would be happy with a 2mb increase? as it won't destroy bitcoin

oh and don't rant about
all blocks will be 1.999mb and the chain will bloat up by 104gb a year
or that miners will take 2x the time to process transactions.
or that bandwidth won't cope.

as miners will not be throwing 1.999mb blocks into the ecosystem right from inception.. they would start slowly and increment up when comfortable.. just like in 2013
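
For what it's worth, the 104gb-a-year figure is roughly right. A minimal arithmetic check, assuming only that a block is found about every 10 minutes:

Code:

/* Chain growth if every block were the 1.999 MB "worst case" cited above. */
#include <stdio.h>

int main(void) {
    const double block_mb        = 1.999;               /* near-full 2 MB blocks */
    const double blocks_per_year = 365.0 * 24.0 * 6.0;  /* one block roughly every 10 minutes */

    printf("%.3f MB x %.0f blocks/year = ~%.0f GB/year of new chain data\n",
           block_mb, blocks_per_year, block_mb * blocks_per_year / 1000.0);
    return 0;
}

which works out to roughly 105 GB/year of new chain data for permanently full ~2mb blocks, in the same ballpark as the figure above.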

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
bargainbin
Full Member
***
Offline Offline

Activity: 126
Merit: 100



View Profile
January 24, 2016, 04:07:14 PM
 #53


I'm not "blaming" anyone, I'm merely describing the way things are, i.e. centralized.
Claiming that a higher blocksize limit would *increase* centralization is ludicrous on many levels: both illogical (negatively impacts China, where most hashpower lives), and irrelevant (who cares if it's 9 or 6 bros, they're all buddies anyhow).

Satoshi Nakamoto is here in my living room and he disagrees with you Grin
Shocked
Then the gentleman touching my butt is an impostor?! He seemed so honest...
watashi-kokoto (OP)
Sr. Member
****
Offline Offline

Activity: 682
Merit: 269



View Profile
January 24, 2016, 04:09:09 PM
 #54


Then the gentleman touching my butt is an impostor?! He seemed so honest...

You seem stressed. Don't you want to visit the restroom? Grin
bargainbin
Full Member
***
Offline Offline

Activity: 126
Merit: 100



View Profile
January 24, 2016, 04:09:19 PM
 #55

Claiming that a higher blocksize limit would *increase* centralization is ludicrous on many levels: both illogical (negatively impacts China, where most hashpower lives), and irrelevant (who cares if it's 9 or 6 bros, they're all buddies anyhow).

It is not ludicrous at all - in order to verify a block (as a full node) you need to verify every single signature.

Those operations are not so cheap (my current laptop is actually unable to even keep up with the blockchain because it is simply not fast enough).

So the more signature verification operations that are required (which is what you get with bigger blocks) the more potential nodes you are going to lose (as they will simply be unable to keep up).

Is this so hard to understand?

Sorry, why should I be concerned about potential non-mining nodes in China again?
watashi-kokoto (OP)
Sr. Member
****
Offline Offline

Activity: 682
Merit: 269



View Profile
January 24, 2016, 04:11:50 PM
 #56

Can't stump the Trump.
Can't crash the p2p electronic cash.

Grin
pogress
Member
**
Offline Offline

Activity: 96
Merit: 10


View Profile
January 24, 2016, 04:19:23 PM
 #57

adam back releases 2mb segwit, (fully compatible and communicates to lukejr, YOU, jeff, greg)

would that be a hostile takeover, would it be preventing anything in adam back's roadmap ??

You seem to imply that we *need* the 2MB blocks ASAP - yet the evidence for that is non-existent (of course it is being pushed by supporters of those trying to wrest control of the project from Bitcoin Core).

If Bitcoin Core ends up supporting 2MB blocks it will only be to stop Gavin and others from taking over the project.

At the end of the day they might be forced into doing this but I really don't think that this is a sensible way forward.


BlockStream <> Bitcoin. I understand BlockStream will try to do whatever it takes to keep control over Bitcoin, but giving control to just one group is not in Bitcoin's interest. The free market has to decide, not just BlockStream and their flawed vision of Bitcoin as merely a settlement layer, with the blocksize limit artificially restricted, resulting in high fees and demand for other off-chain BlockStream services - no thank you, that is the opposite of the original Bitcoin project's vision, and BlockStream and their Bitcoin Core puppet have no right to monopolize Bitcoin.
watashi-kokoto (OP)
Sr. Member
****
Offline Offline

Activity: 682
Merit: 269



View Profile
January 24, 2016, 04:21:21 PM
 #58

This thread seems to attract people with a certain kind of opinion. I wonder what the reason may be.
Bitcoinpro
Legendary
*
Offline Offline

Activity: 1344
Merit: 1000



View Profile
January 24, 2016, 04:27:39 PM
 #59

adam back releases 2mb segwit, (fully compatible and communicates to lukejr, YOU, jeff, greg)

would that be a hostile takeover, would it be preventing anything in adam back's roadmap ??

You seem to imply that we *need* the 2MB blocks ASAP - yet the evidence for that is non-existent (of course it is being pushed by supporters of those trying to wrest control of the project from Bitcoin Core).

If Bitcoin Core ends up supporting 2MB blocks it will only be to stop Gavin and others from taking over the project.

At the end of the day they might be forced into doing this but I really don't think that this is a sensible way forward.



blah blah blah.. more rhetoric about blaming banker motives..
i already said a few posts ago, classic and xt won't be downloaded, and ignore the whole gavin/banker motives

is there a hostile takeover if all the developers of blockstream and many people not related to bankers all wanted 2mb.. and they all released independent versions that were all clean code and able to talk to each other.. thus making the motives redundant..

please limit your reply to only talking about the general community of 3 million people who do not have banking motives.. who just want some buffer space in blocks, instead of this crappy high-fee priority shit due to lack of buffer space


so the bankers don't want to be known as bankers anymore, let's call them tulips then

WWW.FACEBOOK.COM

CRYPTOCURRENCY CENTRAL BANK

LTC: LP7bcFENVL9vdmUVea1M6FMyjSmUfsMVYf
watashi-kokoto (OP)
Sr. Member
****
Offline Offline

Activity: 682
Merit: 269



View Profile
January 24, 2016, 04:33:06 PM
 #60

tulips seems to be stressed Grin