addnode=144.76.139.111
addnode=23.92.86.148
addnode=68.3.131.174
addnode=77.237.251.163
addnode=92.222.27.45
@ 15218
even with those nodes my wallets won't sync past 15158 !! !$^#$!
ok, try this one: connect=148.251.7.102
i did right after you posted it, still 15158.. maybe i have to give it some time..
What does the debug.log say, connected? That node is currently syncing at least 20 other nodes..
i tried it with just connect=148.251.7.102, then it wants to go for other nodes, things like this:
trying connection 99.128.198.115:41682 lastseen=-39.3hrs
connection timeout
trying connection 98.225.220.50:41682 lastseen=-3.1hrs
i tried it by removing addr.dat and setting only that address in the conf, getting the same errors and things like this:
socket recv error 10054
disconnecting node 81.164.5.200:41682
trying connection 185.21.191.139:41682 lastseen=-0.1hrs
connected 185.21.191.139:41682
Added time data, samples 4, offset -4 (+0 minutes)
version message: version 60003, blocks=10616
trying connection 60.190.103.126:41682 lastseen=-43.1hrs
Added 5 addresses from 185.21.191.139: 24 tried, 3024 new
Other nodes are connecting to you and fucking everything up. Add port=19191 to the conf file and try again. OR listen=0 while you sync up.
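A minimal sketch of the two conf setups suggested above, assuming a bitcoind-style coin.conf (the IP and the port 19191 come straight from this exchange; the filename and daemon behavior are assumptions):

```
# Option 1: sync from one known-good node only and refuse inbound
# peers while catching up (connect= also disables other outbound peers).
connect=148.251.7.102
listen=0

# Option 2 (alternative): keep listening, but move off the default
# port so stale peers on 41682 cannot reach you while you sync.
# port=19191
```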
|
|
|
I am sending my hash to your pool to avoid forks. Make sure you fix the problem with the block reorganization, i don't see those blocks marked as orphans in the stats?
They will be marked orphan after 30 rpc calls, this is just to prevent fake orphans.
Great!
|
|
|
We are on a fork again............
|
|
|
I have now been kicked back to "blocks" : 15158,
so is the pool.
This kicking-back business is keeping us all on the same chain? Yes, and we keep going back to the same block.
Ok, let's give it one more try; if we go back again then it's unrepairable.
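The "kicked back" pattern can be spotted by logging the reported height over time: a height that suddenly drops means the node reorganized onto the other chain. A sketch with the observed heights stubbed in (in a live setup each value would come from the daemon's getblockcount RPC; the daemon's CLI name is not given in the thread):

```shell
# Walk a sequence of observed heights and flag any drop (a reorg).
prev=0
for h in 15210 15211 15158; do
  if [ "$h" -lt "$prev" ]; then
    echo "reorg detected: $prev -> $h"
  fi
  prev=$h
done
# → reorg detected: 15211 -> 15158
```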
|
|
|
Ok this is shiiiiit, can we somehow centralize the network until the wallet is fixed? Dev, give fucking input on how to set up ONE node to run the whole network!!
Set up a node yourself!!
|
|
|
sandor, do:
netstat -tn | grep 41682 | awk '{print "addnode="$5}'
and then remove your own IPs and strip the :XXXXX port suffix
post the list here so people can connect
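The awk step above prints the remote endpoint with its port still attached; the port can be stripped in the same pass with split(). A sketch on a fabricated sample line in the shape `netstat -tn` prints (the addresses here are made up for illustration):

```shell
# Pipe one sample netstat line through the same awk logic, using
# split() to drop the :PORT suffix from the remote address in $5.
echo "tcp 0 0 10.0.0.5:41682 92.222.27.45:51234 ESTABLISHED" \
  | awk '{ split($5, a, ":"); print "addnode=" a[1] }'
# → addnode=92.222.27.45
```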
|
|
|
addnode=76.127.202.17
addnode=112.113.96.138
addnode=54.201.24.132
addnode=112.113.96.138
addnode=66.228.46.11
addnode=63.141.232.170
addnode=92.222.27.45
addnode=65.94.75.10
addnode=50.116.56.210
addnode=162.220.242.52
addnode=218.18.106.89
addnode=62.219.98.129
addnode=84.228.247.33
addnode=193.172.16.34
addnode=117.89.180.1
addnode=80.220.73.247
addnode=180.102.194.231
addnode=75.155.179.182
addnode=77.237.251.163
addnode=62.219.98.129
These are some of the nodes connected to my server. Use them, some will work (some have listen=1 and default port)!
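The list above contains repeated entries (112.113.96.138 and 62.219.98.129 each appear twice); piping it through `sort -u` before pasting it into the conf drops the duplicates. A small demonstration with two of those addresses:

```shell
# De-duplicate an addnode list; sort -u keeps one copy of each line.
printf 'addnode=112.113.96.138\naddnode=62.219.98.129\naddnode=112.113.96.138\n' \
  | sort -u
# → addnode=112.113.96.138
# → addnode=62.219.98.129
```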
|
|
|
I am now on:
{
    "blocks" : 15155,
    "currentblocksize" : 1000,
    "currentblocktx" : 0,
    "difficulty" : 0.00134539,
    "errors" : "",
    "generate" : true,
    "genproclimit" : 0,
    "hashespersec" : 0,
    "networkghps" : 0.00006719,
    "pooledtx" : 2,
    "testnet" : false
}
|
|
|
Why is this happening...... unreal... we started with one node, mine, as master node. why does it split after 30 mins or so...
Probably because we have 2 pools with massive hashrate mining and the chains get out of sync pretty quickly with low diff. I guess this is a good thing.
Is the other pool even synced? Or are you talking my servers = pool?
We actually have three: two big pools and primer-'s farm, which is about as big as the other 2 pools.
Haha you are funny, i have a measly 100 khash...
Ok i have now killed all my nodes that are not synced with slimpools
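When two pools mine on top of different tips like this, the split shows up as different block hashes at the same height. A sketch of that check, with the RPC calls stubbed out (the thread never names the daemon's CLI, so the hashes below are placeholders for what a bitcoind-style `getblockhash <height>` would return on each node):

```shell
# Two nodes on the same chain must report the same block hash at a
# given height; a mismatch means they are on different forks.
h1="0000000a...aa"   # placeholder for node A's hash at height 15139
h2="0000000b...bb"   # placeholder for node B's hash at height 15139
if [ "$h1" = "$h2" ]; then
  echo "same chain at 15139"
else
  echo "FORK at height 15139"
fi
# → FORK at height 15139
```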
|
|
|
current block : "blocks" : 15152,
|
|
|
Ok lets hope it does not revert back anymore..
|
|
|
I just can't get them all synced..... I've got 10 nodes at the current tip, 15211, and 10 at 15139... now all at 15139...
|
|
|
i'm now synced to 15113. Can I now use the addnodes from the OP? or will I get desynced again??
wait..
|
|
|
strange...it now shows 0 connections but still gets blocks. 15112 is shown as max.
My servers are bombarded with connections, they are starting to get de-synced. Please wait while i spread the load, otherwise it's all going to crash.
|
|
|