Bitcoin Forum
May 28, 2024, 07:05:44 AM *
News: Latest Bitcoin Core release: 27.0 [Torrent]
 
61  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN] ParallelCoin - DUO - SHA256 + Scrypt | Community Takeover on: December 07, 2018, 03:32:14 PM
Just a short little note to update on progress. For the last week, in between the many distractions that have come all at once, I have been developing an improved difficulty adjustment algorithm. The existing one is very prone to oscillation, a problem further compounded by the volatility of hashpower being pointed at the chain.

The new algorithm uses a fairly simple cubic curve, calibrated to cross at (1,1), applied to the variance measured over a fixed number of past blocks. The old algorithm tried to separate the two proof-of-work algorithms, walking backwards through the chain to gather the last 10 blocks of each, and then applied a flat linear adjustment, basically y=x: if the average time is under target by 20%, it reduces the target by 20%. This is a terrible way to do it, as it increasingly encounters aliasing distortion close to the target, caused by the 1 second granularity of the block timestamps.
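The exact curve is specified in the linked document below; as a sketch only, assuming a response of the shape y = 1 + (x-1)³ (one of the simple cubics that crosses (1,1), and one that damps small deviations rather than amplifying them — the committed code may differ in detail), the idea looks like this:

```go
package main

import "fmt"

// cubicAdjust is an illustrative sketch of a cubic difficulty response.
// x is the ratio of measured average block time to the target time. The
// curve y = 1 + (x-1)^3 crosses (1,1), so on-schedule blocks leave the
// target unchanged; its slope is zero at x=1, so small deviations (the
// ones dominated by timestamp aliasing) barely move the target, while
// large deviations produce a strong correction.
// NOTE: this is a guess at the shape, not the actual pod code.
func cubicAdjust(avgTime, targetTime float64) float64 {
	x := avgTime / targetTime
	d := x - 1
	return 1 + d*d*d
}

func main() {
	fmt.Println(cubicAdjust(300, 300)) // 1: on-schedule, no change
	fmt.Println(cubicAdjust(600, 300)) // 2: doubled block time, doubled target
	fmt.Println(cubicAdjust(240, 300)) // roughly 0.992: 20% fast barely moves it
}
```

Contrast with the old linear rule, where 20%-fast blocks would immediately cut the target by a full 20% and set up the next overshoot.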

https://github.com/parallelcointeam/pod/blob/master/docs/parabolic_filter_difficulty_adjustment.md

Two main strategies are used, as described in the document above. One is the cubic curve; the other is that after the adjustment is made against the curve, the last two bits of the 'Bits' field are flipped, which makes it unlikely that the target falls into a resonant cascade and gets stuck oscillating between long and short blocks over, in this case, typically 4-12 hour periods. This kind of extreme variance is very problematic for users, because they literally cannot know when even the first confirmation will come in, beyond 'about maybe half a day'. Intentionally adding noise to reduce this kind of interference and distortion is well proven, especially in radio and digital signalling technology generally: audio devices use it to suppress unintended frequencies caused by the sample rate, and most modern IPS and better LCD displays use it too (they used to just do a checkerboard; the fuzz is much nicer on the eyes).
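The dither step is tiny to sketch. Assuming 'flipping the last two bits' means XOR-ing the low two bits of the compact 'Bits' value (my reading of the description, not necessarily the exact committed code):

```go
package main

import "fmt"

// ditherBits flips the lowest two bits of the compact difficulty ('Bits')
// field. This injects a couple of bits of noise into the target - enough
// to break up resonant oscillation between runs of long and short blocks,
// while changing the actual difficulty by a vanishingly small amount.
func ditherBits(bits uint32) uint32 {
	return bits ^ 0x3
}

func main() {
	fmt.Printf("%08x\n", ditherBits(0x1e00ffff)) // 1e00fffc
}
```

Note the operation is its own inverse, so applying it twice returns the original value; the point is only that consecutive adjustments never land on exactly the same target.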

Oh, and one last thing: it does not filter out algorithms from the computation. The minimum target for scrypt blocks on the parallelcoin network is currently the easiest possible value - 7 at the front then lots of f's. Because there aren't many miners using this algorithm, at times the difficulty *will* drop to the floor, and 5-10 blocks *will* be spewed forth suddenly. Given that mining is supposed to be about processing transactions, with the reward paying for that work, the incentives are all upside down. So the new algorithm only looks for the most recent block, and uses all of the previous blocks no matter which algorithm produced them - the past difficulty adjustments are irrelevant; what is important is the timestamps. By not distinguishing between algorithms in this computation, the block rhythm should be better regulated.

I am basically satisfied with the workings of the new adjustment now. As you can probably imagine, this work is more about staring and watching grass grow than actually coding, so I'm glad to have drawn a line in the sand about how long I would fiddle with it. There are probably significant improvements still possible, but they would be pushing into diminishing returns. As it stands, the variance from the target is typically held within 20%, and the dithering helps ensure that strings of short blocks happen far less frequently: now they more often come in alternating, sometimes 2, and less frequently 3 blocks at a time. Also, the very wide variance caused when pool miners dive-bomb the chain will be greatly diminished, as such short blocks trigger very big difficulty adjustments. Instead of the current spates of 5-10 blocks with under 10 seconds between them, it is unlikely that more than 3 will come so quickly, and hopefully the miners hang around long enough to get a few more, by which point the timing will be more normal.

Now it will get a *little* more interesting for me. I have to get the scrypt CPU mining working properly again; I think the two RPC ports properly produce block templates, but this will be checked and fixed if not. Then I will be adding more proof of work algorithms. The main thing that will determine what makes the cut for the first hard fork is whether the hash function is available as a Go package - most of them are covered. I am unsure which exactly at this point; I recall looking at Ethash and thinking it would be a huge pain, but Equihash, Cryptonote 7, possibly Cuckoo, X11, and maybe some others like Myriad, Keccak, Skunk and similar are candidates. I will aim to ensure that miners with idle GPU capacity can help secure the network against hashpower attacks. Adding merged mining is a considerable undertaking; if it is going to be done at all, it will be in the second hard fork, and only if it seems necessary.
62  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN] ParallelCoin - DUO - SHA256 + Scrypt | Community Takeover on: November 29, 2018, 07:28:07 AM
Just a minor update regarding planned hardfork changes...

Parallelcoin suffers from a problem with its difficulty adjustment because of how it is computed. It bases the adjustment on the time between the latest block and the 9 previous blocks of the same algorithm. This window is absurdly short, typically spanning between 50 and 100 minutes, so it is highly vulnerable to what I call a 'rhythmic hashpower attack'. For this, I have two main strategies:

  • A long window of blocks - 4032, approximately 2 weeks at 5 minute blocks - from which 50% are randomly selected; the top and bottom 12.5% of the sample are removed and the average is computed from the rest. This eliminates the sequences and outliers that make the mean unrepresentative of actual block times, and ensures that no type of sequence or rhythm can control the adjustment. Because it uses a much longer timescale, it also does not fluctuate wildly up and down chasing the impossible regularity of a Poisson process, and short periods of high hashpower cannot move the computed mean above the actual mean.
  • The weights of the 4032 past blocks are summed to give a mean block weight, and difficulty is further adjusted based on a new block's proportion to this average. At or below the mean, difficulty is nominal; above it, the nominal difficulty is multiplied by the factor above the average, squared. This does not affect the nominal difficulty; its purpose is to discourage cramming all waiting transactions into one block when they bank up, meaning more miners get a share of the block fees.

The second part is based on the method used by Freshcoin, which mines most of its supply early. This coin doesn't do that, but it instead has the problem that most blocks have no transactions in them, and they very often are spewed out 10 at a time within a minute. By raising the required difficulty for blocks with an over-average number of transactions, it ensures that high powered pools are not able to deny the loyal, smaller mining pools and solo miners their blocks. I may also consider scaling difficulty downwards below the mean block weight, so that even solo miners have a decent chance of clearing one transaction.
63  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN] ParallelCoin - DUO - SHA256 + Scrypt | Community Takeover on: November 24, 2018, 10:54:33 PM
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

As you can all see from @traxor's updates, I have been very very busy.

This morning I finished getting the wallet daemon working (mod - because it's modular, and that's a nice short name). The new full node and wallet are based on btcsuite's btcd and btcwallet, with all the necessary changes to conform to the existing consensus.

They can be found here: https://github.com/parallelcointeam/pod and https://github.com/parallelcointeam/mod - I will be making a full release with installation packages for Windows, Mac, Debian/Ubuntu, Redhat/Suse, and since I have an account at the Arch Linux AUR I will make a PKGBUILD as well, so it will be available through the AUR (a binary, version-pinned build and a rolling master branch build).

The new full node runs two separate RPCs for miners and pools to use, defaulting to 11048 for sha256d and 11049 for scrypt. The full node, CLI controller and wallet daemon all autoconfigure for localhost access; both `podctl` and `mod`, on first run with no configuration, will copy the RPC credentials so you can get up and running quickly, integrating the full node and wallet automatically. The wallet daemon listens by default on port 11046.

It got quite exciting a few weeks back, as somebody with access to a lot of mining power started to put out blocks with different version numbers, which it turns out the old client treats as sha256d blocks. Then they started using 'big S' signatures, which are part of the hypothetical signature-malleability double-spend attack - and sure enough, the logs started showing incoming blocks being rejected because their outputs were already spent. The latest is that they seem to be trying to mine a side chain, but so far the rest of the miners have enough hashpower to keep the honest head at the front.

Our secret shopper, whoever they are - or maybe they are just a legitimate Wink would-be blockchain robber - has ensured that it will be necessary to upgrade the protocol significantly. These are the changes I have in mind:

- - The new client will automatically set the equivalent of a checkpoint at about 288 blocks (2 days) deep, so it will automatically block attempts to create side chains and push them ahead with a 51% attack.

- - The difficulty adjustment needs to work from a longer timespan than 50 minutes. I will set it to 4032 blocks, approximately 2 weeks. This addresses the problem that short-term bursts of greatly increased hashpower are leading to miners printing 10 blocks within a minute and then nothing for hours, which is entirely unsuitable for a real world transactional currency. Extending the averaging window will mean that the difficulty adjusts more slowly and can't be pushed up artificially by timed bursts.

- - Block versions have been messed up by the secret shopper, but after the prescribed hardfork block height they will again be properly enforced, and I will design an extensible protocol similar to bitcoin's that allows us to upgrade the protocol in the future using BIP9 soft forks.

- - The peer to peer and RPC protocols will be enhanced with a MessagePack binary format and will use OTR-style Diffie-Hellman perfect forward secrecy encryption. It is absurd to use OpenSSL/GnuTLS, which is based on a certificate-authority trust model - meaning you have to faff about with certificate authorities just to get connected. The mechanism will be extended to let users make black- or whitelists. Web of trust protocols are not identity-secure either: you sign public keys, and afterwards the signed keyholder can advertise your approval of them. There might be some purpose to this, but I can't think of it. This encryption will also be added to the peer to peer protocol because, in my opinion, even though it's public information, the propagation pattern should be protected at least a little, as locations enable attacks.

- - I am planning an SPV wallet, which will have a native compiled GUI available. The SPV wallet is the testbed and initial showcase/prototype for the peer to peer network extensions I have planned.

- - The first application of this will be transaction proxy relaying, like the outbound connection component of the Tor network. Each node will have a unique identifying EC keypair (ed25519, natch), and a transaction broadcast will be wrapped in three layers designating the 3 peers that will relay the message. Each node will only know where it came from and where it's going, but not whether it is the first, second or last hop before the destination node.

- - From there, wrappers, sockets and interface libraries will be built that other applications can use to connect to the network and find specific subprotocol peers, enabling other types of distributed applications: centralised/federated ones (eg diaspora, paxos/raft/SporeDB and other federated WoT database protocols), trust-computed ones (for example the Steem reputation system), and trustless peer to peer protocols (yes, potentially allowing multiple cryptocurrencies to interface directly through the network). In other words, I am aiming to make the parallelcoin network the wires connecting a whole ecosystem of applications.

When I have completed the releases, with installers for all of the above mentioned platforms (and any specific requests) - and you can already see on the newest release @trax0r posted that there are literally binaries for almost every platform in existence - I will make a new ANN thread for discussion and feedback.
-----BEGIN PGP SIGNATURE-----

iHUEARYIAB0WIQThc/kXLToA5xCfIuOCA/USO9KcBAUCW/nWowAKCRCCA/USO9Kc
BJdXAQD/+BxK5stddGlA1InaDDrF6GI74Dfqz62E2Qu5E7mqCgEApeF9r1SRvS4c
rveVObyPNZ5DJNMIkMVlpsRT0rZIhA0=
=EmXt
-----END PGP SIGNATURE-----
64  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN] ParallelCoin - DUO - SHA256 + Scrypt | Community Takeover on: June 14, 2018, 06:34:55 AM
The discord is here:

https://discord.gg/nJKts94

Heh, you can all point and laugh at my folly with the 'no block sooner than 150 seconds' idea now... It made the chain fork very quickly Smiley More orphans than a Charles Dickens novel.

I am going to test just raising the difficulty adjustment window so it averages over a longer period next.
65  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN] ParallelCoin - DUO - SHA256 + Scrypt | Community Takeover on: June 13, 2018, 12:24:29 PM
I have started on fixing the difficulty adjustment. It amazes me that bitcoin, litecoin and many other early coins have no defence against a very wide variance in hashpower over time. Once a block comes in at a very high difficulty, the difficulty only adjusts down when another block is found - and that block must meet the previous, elevated difficulty. Essentially it averages the last 10 blocks, but it can only bump the difficulty by 2% each time a new block comes in, so the attacker can just show up for 5-10 minutes a day, bomb the chain with blocks, and know it will likely be at least 3 hours before anyone else on the network finds a solution at the difficulty it has reached. I think this is known as the 'instamine' problem.

So I have been busy searching through the code, following the execution path from startup, and located the place where difficulty is adjusted - where I can potentially add a timer-triggered event set to exactly 5 minutes, with a new one every minute afterwards, that recalculates and propagates a difficulty change.

The other measure I think is required is to massively accelerate the rate at which difficulty rises in response to blocks less than 1 minute apart. These adjustments really should go up fast: I think 4% at the second block within 10 seconds, and for each subsequent block less than a minute apart the adjustment doubles - so the next block under 1 minute pushes the difficulty up 8%, the 4th triggers 16%, the 5th 32%. This should at least slow the pace of the burst (if it was 4 seconds to the next block, the one after will likely take at least 16, then 256, by which time we are close to a normal block period).
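The doubling schedule is simple enough to sketch (assuming the per-block bump is applied as a plain multiplier - exactly where and how it hooks into the C++ accept path is still to be worked out):

```go
package main

import "fmt"

// fastBlockBump returns the upward difficulty multiplier applied at the
// k-th consecutive block arriving under a minute after its predecessor:
// nothing at the first, 4% at the second, doubling for each one after
// that - 8%, 16%, 32%, ...
func fastBlockBump(k int) float64 {
	if k < 2 {
		return 1.0
	}
	pct := 0.04 * float64(uint(1)<<uint(k-2))
	return 1.0 + pct
}

func main() {
	for k := 2; k <= 5; k++ {
		fmt.Printf("fast block %d: +%.0f%%\n", k, (fastBlockBump(k)-1)*100)
	}
	// fast block 2: +4%
	// fast block 3: +8%
	// fast block 4: +16%
	// fast block 5: +32%
}
```

Compounded over a 5-block burst, the target has tightened by roughly 1.04 × 1.08 × 1.16 × 1.32 ≈ 1.72×, which is what prices the burst-mining strategy out.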

This adjusts to the massive jump in hashpower quickly, and after that the timer-triggered event lowers the difficulty by 2% every minute past the 5 minute mark. It should come back down to the actually available hashpower within 20-30 minutes, which is long enough to at least double if not quadruple the cost to this attacker relative to their profit. They would need to keep mining for at least half an hour to get more than 3 or 4 blocks, and they don't even seem to hang around longer than about 5 minutes. That makes sense, since 5 minutes is the averaging period and after 10 blocks the difficulty has risen by at least 1.02^10, about 22% - and the longer the gap, the more likely the chain is to adjust downwards.

Essentially, the chain currently has no strategy for dealing with a sharp decrease in block time, which means the target is woefully incorrect most of the time. During an attack it isn't high enough; when the attack is over, it has no way to adjust back down to normal hashpower.

In fact, another measure could be an option: simply disallow any block arriving less than 1 minute after the previous one. This should not be based on the block timestamp; instead, the accept-block function can simply reject anything received sooner than 60 seconds after the last block and drop it. This might be even simpler to implement than a timer-triggered event, and all the necessary variables are already in scope where I would add it. There is absolutely no benefit to blocks arriving closer than 30 seconds apart anyway, and one minute is a reasonable minimum.

The process by which servers in a distributed network decide what to accept or reject has a built-in subjectivity. Transactions do not propagate instantly to all nodes at once; at any given moment, almost every node's memory pool differs from every other's. By having each node subjectively reject any block received less than 60 seconds after the previous one, we are not trusting the timestamps in the blocks, which can be off by some period (I think with bitcoin it's an hour either way), but rather the way the blocks actually propagate. If a block comes in close to this boundary, perhaps 25% of nodes reject it because it arrived too soon for them, but the block will still succeed because most nodes didn't see it until after they saw the previous one.

Anyway, I need to set some agenda items here in the Lab, hopefully I will see the boss soon.
66  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN] ParallelCoin - DUO - SHA256 + Scrypt | Community Takeover on: June 12, 2018, 07:37:29 AM
Indeed. It's not just bad for mining, it's also bad for transactions. I really have to fix it before moving on to writing the new version.

I've currently got a weird problem with the build: it works 100% on my home PC, but the system I'm running at the lab is not working properly for some strange reason. I will be working on fixing this, and on adding to the consensus a lowering of difficulty and a change to how the difficulty is computed. It needs to raise difficulty more aggressively when hashpower rises - maybe as much as 4 or even 8 times as aggressively - and once the gap since the last block passes about 20 minutes, it should aggressively lower difficulty until a block is found.

The attacker on the chain is exploiting the fact that the difficulty does not return to match the hashpower present most of the time, so they are monopolising blocks and causing them to come in a short burst every 3-12 hours. If we could stop them getting more than half of their blocks as they dump hashpower on the network, then lower the difficulty at an accelerating rate over the hour after these spikes, the miner would either have to stay on the network longer, lowering their profit, or go elsewhere to exploit flaws in other cryptos.

I think I can probably make a solution for it, but it will also require us to get the exchanges and other services related to the coin to accept the change to the consensus.

I may also have to add merged mining before any new client is worked on, to bring more hashpower onto the coin and reduce the fluctuation caused by coin-hoppers like the one on our chain.
67  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN] ParallelCoin - DUO - SHA256 + Scrypt | Community Takeover on: June 10, 2018, 10:09:06 PM
hehe... and the more I work with it, the more I understand, even if I still hate the evil C++ and the evil Boost. I didn't know if I would even manage to fix the code to build with GCC 8, but it's all 100% now.

Nothing yet requires getting any exchanges or pools switched over, though. I should do some testing to tweak the parameters as much as I can, and I will definitely look into adding something to the network tick to reset difficulty as blocks get slow. If I can at least bring the rhythm back to the chain, that would be a huge improvement. I will hold off on merged mining until the revamped client. Marcetin has pointed me at some other work done in this direction; I now know there are at least 4 proof of work blockchain clients written in Go.

Since the changes are not really that big, I think we should be able to get Cryptopia to upgrade their node without too much persuasion. We will get a testnet running and torture it for at least a week before we confirm it's ready.

As regards forks, they can happen unintentionally. It took a lot of coordination by the Ethereum guys to make sure that geth and parity didn't desync; there was a big issue recently where the C++ client was at risk of causing a chain fork because of problems in the code.
68  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN] ParallelCoin - DUO - SHA256 + Scrypt | Community Takeover on: June 10, 2018, 07:47:07 PM
At this point there is not actually a fork; I have simply updated the repository to build against current versions of things on a current linux system.

The repository below is now fixed to build on more or less any current version of linux. I don't know whether the mac or windows builds work; I didn't touch them, because I don't know anything about them.

https://github.com/ParallelCoinTeam/parallelcoin

It's not a new version, just a version that you can build on any linux. There is a script at src/linux-build.sh that will ease all the painful bits: building the server and Qt wallet, and dropping the necessaries into your home folder to add them to your favourite linux desktop application menu (for the Qt app, anyway).

For the technical details: I embedded openssl 1.0.1 into the repository (just so you don't have to Smiley ), and you need to install berkeleydb 4.8 and boost 1.59 beforehand. For linux users, this gives you the parallelcoin-qt GUI wallet and the daemon to run however you want. I fixed the code so it all builds out of the 'src/' directory, both the daemon and the Qt wallet. It took a few changes here and there, many of which had already been made on other bitcoin-based clients previously (well, I mostly figured them out beforehand by reading the errors).


I have some parameter changes for the chain worked out: 30 second blocks, a minimum difficulty lowered to be reasonable for scrypt, and an increase in the maximum difficulty adjustment applied when hashpower changes break the block time pattern. 30 second blocks is an optional change; I am looking to hear arguments for and against. I think 1 minute would be enough - 30 seconds shaves it a bit close to the limits of blockchain systems.

These are changes that will cause a fork, so we have to tee things up with pool operators and exchanges to get them pushed through. I suspect most people reading this would be glad to have this update: shorter blocks and a more aggressive difficulty adjustment will significantly reduce the arrhythmia the chain has developed.

I am looking into how to distribute the new version better. I am looking at flatpak for the GUI, which greatly simplifies installation for casual users on Linux. We will need a mac and a windows person to check that the builds still work on those platforms - I suspect they will, but the mac version in particular might take a little more work. I am attempting to get a static binary built now that will just run on any recent linux version without any installation, something we can put on the 'releases' page alongside the mac and windows binaries. I think Marcetin might be able to work with me to get the mac version sorted out tomorrow or the next day.


I have temporarily diverged from the main mission of building a Golang-based client for the network, because I think current and recent builds should be available, and there is a small set of tweaks I have made that I think will improve the performance of the network. We have one or a few cloud miners dumping hashpower sporadically on the network, which is exposing a serious flaw in the protocol: it only adjusts difficulty when someone finds a solution at the elevated difficulty. The lack of a scrypt pool and the generally low hashpower on the network contribute to this, but if I dig around a bit I may be able to figure out how to trigger difficulty adjustment checks between blocks, and have the difficulty drop fast enough between these attacks to let other users actually win blocks.

I have been getting far enough into this that I might actually be able to make this change myself fairly easily, before the new client is written, and I can probably tune it so that it progressively lowers difficulty until it hits the surface of the available hashpower and gets 5 minute blocks regularly, even with such a wide difference between the network's maximum and minimum hashpower over time.

So, to set the agenda for the dev team's next bits of work: we need someone to help get the windows and mac versions compiling, and we need to launch a scrypt pool. I will get a static binary of the server and Qt wallet built in the next day or so, which hopefully I can easily turn into a flatpak - or, at minimum, most people will be able to just drop it in their executable path and use it on any linux.

The one thing that would be really nice to get happening is merged mining, so we can get some free hashpower to reduce the vulnerability of the chain, but I'm not sure exactly what's involved. I know several sha256d and scrypt coins based on bitcoin have it, so it might not be so difficult to integrate.

I just checked a static build I started and found that the upnp and berkeleydb libraries didn't have objects to link. I think I'll be able to fix that tomorrow, and then I can push up binaries for the new 1.3 build. I bumped the version number because I think it's overdue; the parameter-change version will then be 1.4... we'll still be two versions behind bitcoin and litecoin, but whatever! The golang client will be 2.0, and it will be significantly better - I will make it modular so we can add numerous other merge-mineable algorithms to further reinforce the hashpower security of the system.
69  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN] ParallelCoin - DUO - SHA256 + Scrypt | Community Takeover on: June 08, 2018, 06:44:59 PM
I would like to introduce myself, since my boss has been so busy he didn't think of it - I am the new lead blockchain engineer/dev in the project now. My name is Loki.

I have been working on a golang version of Parallelcoin, based on Piotr Narewski's Gocoin, and I had just reached a point where the new client was sucking up blocks from the network, and I became aware there was some kind of problem with the network.

As a result of this bit of edification, I switched over for a couple of days to just messing with the ancient version on Marcetin's repository, and explored how the thing works between a couple of machines in my apartment (a slow old laptop and a desktop with an i5). After many experiments, changes, and much staring at server logs, I determined there are a number of pre-go-fork changes that will help encourage people to actually use this coin. I have these changes all ginned up on another repository and have even worked up the required push, but I haven't been given access to the main repo yet, so it's sitting idle on my hard drive.



1. Maximum percentage of difficulty adjustment *will be* doubled.

The reason for this is that some lazy-ass bugger has taken to dropping in on the network and spewing blocks at it. Doubling the maximum adjustment will reduce the spewiness of their little farm of SHA256D miners a tad - well, by twice as much as now, per block. I considered making the adjustment more aggressive still, but then discovered the chain requires a block to arrive before it recomputes difficulty (more on this in a little). This will roughly double the cost of their slovenly grasping over the 10 or so blocks they typically grab, and maybe that will make them go away. Well, it's probably not enough to make them go away, I think. The real issue is that for hours afterwards, sometimes half a day, nobody else wins a block.

(This is partly you all's fault for not mining it with scrypt)

2. Minimum difficulty has been lowered. I dunno how many other blockchain engineers are reading this, but scrypt is something like 10-100x slower than sha256. On my i5, at minimum difficulty (bits=1e00ffff), 4 cores find a solution on average about every 3-7 minutes. That is way too high a minimum, considering nobody is actually mining with scrypt. Lowering it to a more reasonable level will mean that even a few people running the node with -gen turned on will disrupt these long bouts of silence from the network. It's not a change that will affect sha256 mining, unless suddenly everyone turns off their miner, which is unlikely - although it seems there aren't many mining anyway.
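For anyone following along: bits=1e00ffff is bitcoin's compact target encoding - one base-256 exponent byte and a 3-byte mantissa. Decoding it makes the floor concrete (the helper name is mine):

```go
package main

import (
	"fmt"
	"math/big"
)

// compactToTarget expands a compact 'Bits' value into the full target:
// the top byte is a base-256 exponent, the low 3 bytes are the mantissa,
// so target = mantissa * 256^(exponent-3). (Assumes exponent >= 3 and
// ignores the sign bit, which never appears in real difficulty values.)
func compactToTarget(bits uint32) *big.Int {
	exponent := uint(bits >> 24)
	mantissa := big.NewInt(int64(bits & 0x00ffffff))
	return new(big.Int).Lsh(mantissa, 8*(exponent-3))
}

func main() {
	// the scrypt floor mentioned in the post: 0xffff followed by 27 zero bytes
	fmt.Printf("%x\n", compactToTarget(0x1e00ffff))
	// bitcoin's familiar genesis-era floor, a 256x smaller (harder) target
	fmt.Printf("%x\n", compactToTarget(0x1d00ffff))
}
```

A larger target means more hashes qualify, so 1e00ffff is 256 times easier than bitcoin's 1d00ffff minimum - and on CPU scrypt that still works out to minutes per solution on a desktop.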

3. We can change the chain to have 30 second blocks. Why not? Along with this tenfold reduction in block time, the block reward will be reduced by the same factor, so you get the same number of tokens in the same time, just spread over more blocks.

I really wish I could quickly and easily add a consensus rule that lowers difficulty after the block window passes, but I am not really a C++ programmer, and the cryptic nature of C++ is quite repulsive to me (I was hired to code in Go, so, as expected). That change will be first cab off the rank when I finish the gocoin fork. I simply had no idea how much of an abandoncoin I was dealing with, and it's amazing - probably a testament to the loyalty of you all - that it still hasn't been completely delisted into nonexistence despite the glaring problems with the network.

Anyway, to give a teaser of what is planned other than such dry and dowdy things as the above (which are still pretty cool, really):

1. Masternodes - not staked! Simply, you will be able to get a share of the block reward for keeping the p2p network alive and making the blockchain data available to anyone who requests it. Any masternode seen to be alive and serving data within a short period will be eligible to win the next block's reward share. I am tentatively setting it at 10%, and the network will reject blocks that don't pay a masternode, or that pay one other than the deterministically selected one (based on a hash of a specific recent block - maybe 20 back, maybe the head, I'm not sure yet).

Marcetin and I both agree that staking is an artificial incentive and a market-manipulation attempt, so you will simply get paid to configure one of the new DUOD nodes to serve up the chain.

This is an interim step towards further enhancements, of course. It is the point in the architecture where I see an opportunity to start developing a SporeDB-based BFT type system. After the change is implemented, syncing a new node will only require selecting a trusted node; it will then slurp the whole chain directly in the form the server stores it (the gocoin fork squashes the data by about 30%, btw, so not 128Mb but more like 90Mb right now), along with the chain index, with no need to actually replay it. At your option - this is one of the reasons for doing it this way - you can always sync the old-fashioned way, but with this new feature, after at most about 150Mb of download right now, your node is up and running and answering queries. For most of you that means about 10 minutes. Replay currently takes about 90-120 minutes, and if these features make this a desirable cryptocurrency, that's indubitably going to get a lot worse over the next 2 years.

2. Progressive Web App GUI wallet delivered directly from your node. No more scratching around for a GUI wallet to use. You will literally be able to point your browser at it, tell Chrome to install it as an app, and it will also work offline. The Gocoin codebase includes a fiddly CLI cold wallet as well; this will be beefed up into a browser app too, so despite being offline you will have the convenience of 30 years of WWW experience delivering cold wallet functionality without being forced to learn how to type. We will likely look at enhancing this app further so it can become the basis of a web wallet that anyone can spin up and serve within an hour. I am personally wary of foreign code, so you can be sure I will make certain you know it's the legit, real version, whose source you can read - or pay someone to read - to assure yourself it's not stealing your keys.

And that's just the beginning. It is also Marcetin's plan to leverage the power of this community towards growing the system into more than just a cryptocurrency. I have been working for over a year on a design for an extensible, modular, HARD (i.e. native) smart contract platform that will include a democratically monetised forum, a messaging system, a distributed exchange network (including tools to help people bind in other cryptos), and - my favourite - a gitlab-type application that runs on the network and lets you get paid to code, and of course a distributed marketplace. But we don't stop there: the next stop is an anonymous routing system, the option to get paid for providing anonymous relay service, and eventually the ability to launch whole applications on the network, smart enough to run a multiplayer action game at sub-100ms latency in a massive (probably continent-bound) environment.
70  Bitcoin / Wallet software / Re: Gocoin - totally different bitcoin client with deterministic cold wallet on: June 04, 2018, 06:34:14 AM
Ok, I have returned to working on it... I got a full node of another chain running and set it as the sole peer of the altered prototype. Now it reports bad header formats and PoW, as it should, since the coin I am altering the client to work with has two proof-of-work algorithms - and an extra byte in the block header to signify which one a given block uses.

I have to do some forensics on the blocks of the foreign chain and split the logic so there are two PoW difficulty consensus values and recognition of both valid PoWs, but I am making significant progress. By the end of this process I will most likely be able to point precisely at which parts have to be changed to cope with a different network, and which for different chains. I have the idea of shifting all those functions and settings into a separate library so it becomes possible to quickly target a new PoW coin using the Gocoin codebase.

https://github.com/ParallelCoinTeam/duod

There is a working docker version of the circa-2014 parallelcoin in there; you could likely substitute any other coin for this purpose easily. I set the configuration on the gocoin side to connect only to this docker instance. After it grabs 20 incorrectly formatted blocks it complains and stops syncing for a bit, tries again, complains again... If you changed which coin is built in - genesis block info, PoW verification, difficulty adjustment policies, etc. - you could make it work.
71  Bitcoin / Wallet software / Re: Gocoin - totally different bitcoin client with deterministic cold wallet on: May 31, 2018, 08:08:10 PM
I just wanted to report on my experiences forking. I actually searched through all 100-odd other forks, and nobody had really done anything. So I basically was doing something that had not been done before.

I was able to change the majority of chain parameters and use different DNS seeds, but for a reason I have not yet been able to determine, the peers on the other network refuse to grant auth to my modified gocoin node. To be fair, it is significantly different - there are two PoW types in the network I am trying to join it to. The gocoin client doesn't even report, in the network section of the interface, the IP addresses of the nodes it's trying to connect to. I added extensive trace logging to many functions, which revealed the IP addresses, but to be honest I don't know for sure whether they were even correct from the DNS seed.

I kinda had hoped that configuration would be more centralised, but instead there are about 3 or maybe more different places in the source that have to be changed, and those that I identified and changed to match the other network were not enough to get peers seeding blocks to my node.

It's not a complaint, per se, or a request or anything like that. It's just a very nice bitcoin client, and I thought it would not be so difficult to make it talk to a network based very closely on bitcoin (parallelcoin) - and a really quite old one, so old that building the binary is a maze of ancient dependencies.

I understand that you built this mainly for your own benefit, so your needs, as a well seasoned blockchain dev, are necessarily quite different. Anyway, it's a very nice thing you made; it just seems to be a bit too different for me to figure out how to get it to connect to a different but basically bitcoin-based network.
72  Alternate cryptocurrencies / Altcoin Discussion / Re: [ANN] Genesis Block Generator on: May 19, 2018, 06:32:48 AM
I made a port of this code into Golang for those who are interested. There is an error in the source of the miner at the end of the OP: it doesn't correctly apply the difficulty target in its search (it searches for only 4 zeroes at the big end instead of what nBits specifies, though this happens to match the minimum difficulty). I have fixed this, and it is now precise to the bit. Possibly I could have made my job easier by converting both the difficulty and the hash to big.Int, but the way I have written it checks correctly - and I suspect using the bytes-to-big.Int converter would have equal or greater overhead anyway.

https://github.com/calibrae-project/spawn/blob/e00ddd6ca93e1f96f69b75fd3b8536d98b45deb8/tools/genesis/create/creategenesis.go
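The bit-precise check described above amounts to expanding nBits into the full 256-bit target and comparing the hash against it as an integer, rather than counting zeroes. This is an illustrative sketch, not the actual creategenesis.go code, and it does use big.Int for clarity:

```go
package main

import (
	"fmt"
	"math/big"
)

// targetFromBits expands the compact nBits encoding into the full 256-bit
// target: the low 3 bytes are a mantissa, the top byte a base-256 exponent.
func targetFromBits(bits uint32) *big.Int {
	mant := big.NewInt(int64(bits & 0x007fffff))
	exp := uint(bits >> 24)
	if exp <= 3 {
		return mant.Rsh(mant, 8*(3-exp))
	}
	return mant.Lsh(mant, 8*(exp-3))
}

// hashMeetsTarget treats the hash as a big-endian integer and requires it to
// be at or below the expanded target - precise to the bit, instead of only
// checking for a fixed number of zeroes at the big end.
func hashMeetsTarget(hash [32]byte, bits uint32) bool {
	return new(big.Int).SetBytes(hash[:]).Cmp(targetFromBits(bits)) <= 0
}

func main() {
	var h [32]byte // an all-zero hash trivially meets any target
	fmt.Println(hashMeetsTarget(h, 0x1d00ffff))
}
```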

It does take quite a long time to compute the hash. On my i5-7600, running on one core, it takes between 30 and 60 minutes. Of course the solver I have written isn't going to win any awards, but a CPU is not the right device for computing hashes as a sole task anyway. I guess it's a good opportunity for me to learn how to write goroutines and split the task amongst all the cores of the CPU by dividing the nonce-space; on my machine that should drop the time to find the right nonce to about a quarter.

After thinking about it and reading about goroutines, I figured out that parallelising such a search is very simple: my routine grabs the current time when it reaches the end of the nonce number space, and I have set the workers to start one second apart, so this should dramatically improve the time my genesis generator takes to finish - especially if you are running it on a machine with 8 or more cores.

Here is the updated version with parallel execution of the search:

https://github.com/calibrae-project/spawn/blob/94977557fcf0ae653e5c1a0c831a42ff0240b905/tools/genesis/create/creategenesis.go

UPDATE:

I still had some things wrong, and decided to make it default to generating a random public key if no input is given. It now correctly targets the difficulty; the solutions generally have 3 zero bytes at the big end.

https://github.com/calibrae-project/spawn/blob/f682a9d7aa43a764682dec77cd6a50a0c6049537/tools/genesis/create/creategenesis.go

It pops out the solution so fast now that it's almost pointless that it fans out across all your CPU threads to perform the search.

Enjoy!
73  Bitcoin / Wallet software / Re: Gocoin - totally different bitcoin client with deterministic cold wallet on: May 15, 2018, 03:21:25 AM
Wow, 103 forks! Yeah, as a golang fiend your code was a breath of fresh air after so much tangled C++.

If money comes in for the project my fork is building on this base, I will definitely contact you to see if you would be interested in being paid to help maintain and expand the fork's codebase - I am very impressed.

I have been studying the situation with proof of work and ASICs intensively lately, and it's my opinion that the problem is simply that nobody has been writing puzzles that require the strengths of CPU or GPU processors. The PoW for the fork of Gocoin will be one designed to require strong database processing capability, which is exactly what a CPU is good for (lists and trees). I am also precisely following an issuance model that mimics precious metals.

Oh and yes, there will never be a registered Calibrae corporation. People will probably need to incorporate for setting up liquidity pools for the distributed exchange, but that is peripheral; my project is intended to be a protocol for doing business, not a business in itself.
74  Other / Off-topic / Re: Hummingbird Proof of Work - a new design with a pretty diagram to explain it on: May 13, 2018, 08:34:02 PM
Cuckoo Cycle has a maximum of 8192 hashchain elements in a search. It uses a bucket sort to order them in order to find partial collisions, and as such, the amount of time taken for a solution is fairly uniform.

Hummingbird takes a different approach. The chains are much longer and the number space is much wider (at this point I think it will be adjacency lists of 64 bits, with 32 bits per head/tail pair), and it is designed to consume a much larger amount of memory (upwards of 12Gb). I am writing a binary tree algorithm based on the b-heap used in Varnish, except the sorting is lateral instead of vertical - generally the same principle though - designed to be very data-cache local, so that much of the searching and sorting takes place inside the cache, allowing enough time for memory retrieval. By doing this it bypasses much of the advantage of GPU and potential ASIC acceleration, since most of the action happens inside the CPU cache (and I have been reading about the Ryzen Threadripper 1920X... so much want!).

So, solutions will be found more progressively by the solver, and because of the Poisson distribution of any decent hash algorithm (I will be using HighwayHash), the difficulty adjustment will not target number sizes but instead the number of elements in a cycle, encoded as the scalar of the hashchain distance from the seed nonce. This means verification will be a little more work than Cuckoo, Momentum or Primecoin's algorithm, but again, because of using HighwayHash, so long as nodes have AVX2 it should not be onerous. The work lies in the search of the hashchain sequence and the requirement of accessing a lot of memory, but with the tree algorithm, latency delay will not be a large component of what makes it 'work'; rather, it is having to explore such a large number space (likely in the range of 16-24 bits of address space to encode the scalars of the hashchain elements).

Because such a large number field is being searched, any strategy to accelerate the search would require memory faster than the tree algorithm can sort the elements within the cache, which is basically impossible over an external front side bus; in fact, the fastest hardware to solve it would be a CPU with a bigger cache. This is not a type of chip that small ASIC fabs can manufacture - only CPU manufacturers have the equipment and economies of scale to print that much fast memory into a processor. Possibly actual memory manufacturers could build them, but the point is that the tree algorithm and the size of the hashchains preclude any advantage over commodity DDR4+ memory and CPUs, to an even lesser degree than is the case with Cryptonote and Equihash in economic terms - the cost of producing extremely large SRAM cells on die with a processor can't really be improved over what Intel and AMD are already doing. Furthermore, the tree algorithm I have designed should make it difficult to find significant accelerations in the process.

Of course this is all hypothetical, and I am certainly not claiming I have definitely got this right, but it makes sense in my mind anyway: if you can shift a lot of the processing in a search inside the cache, so that the process is rarely waiting for memory to arrive, you reduce the memory bus load compared to a search/sort/insert/delete that depends on waiting for memory - and that is something you simply cannot do without substantially sized caches. The access pattern is not going to be any more deterministic, but walking the tree only requires pulling 3-4 memory pages per walk with the tree I have designed, compared to a brute force sort, which will overflow the cache constantly and move the bottleneck to the front side bus.

If it works as I hope, it may also make mmapped files viable, with swap on fast flash storage, and reduce the differential between small low-power hardware and larger systems with lots of DRAM attached, in terms of cost per solution capability (an ARM 64-bit processor has 4mb of cache, compared to an Intel i5's 6mb, meaning that if cache performance becomes the critical factor in solution discovery, these much cheaper chips may work out close enough in capital outlay and power consumption to produce a relatively uniform ratio). In simpler words, throwing more money at the hardware will not significantly increase the output of solutions - which is the central focus of ASIC development.
75  Bitcoin / Wallet software / Re: Gocoin - totally different bitcoin client with deterministic cold wallet on: May 13, 2018, 07:49:10 PM
Firstly, I just want to say, what a beautiful piece of software you have made, Piotr! I had never heard about it; I had already looked at btcd, and as soon as I saw yours running I wanted no more of btcd.

I am forking the code quite heavily, a bit more than a regular altcoin fork: the reward formula will be based on exponential decay, and because of this I can also eliminate transaction fees, but I have to massively increase the precision of the coin denomination (I think it will have 12 whole number places and 64 decimals). I am also switching out siphash for highwayhash and secp256k1 for ed25519, with a block time of 1 minute and bigger blocks.

I noticed that the code makes golint put a lot of squiggly green lines in my VSCode editor; I suppose Go's idiom has changed significantly since 2013. I will be reworking a lot of the code, but compared to working with regular C++ based cryptocurrency nodes your code is a pleasure to read, and the web interface is beautiful and very informative. I will be watching the repository for when you complete segwit multisig; it will probably be a month or two before I am testing it. I will probably also add a gRPC interface to make it easier to plug into other applications.

I am quite sure it will end up greatly changed compared to the original, but you will be most welcome to use any changes or additions I make outside of the core protocol parts. I think the web interface could be expanded to the point where it is as feature-laden as bitcoin-qt, optionally with a nice Angular/Material skin on it.

In case you are concerned about my intended use: it will not be for a business, but rather a base token for a larger protocol built on the SporeDB BFT database replication protocol, including a DEX, forum, chat system and git repository - and hopefully from there it becomes a way for programmers to get paid without really having a boss (using a reputation system and rewards distribution similar to Steem). I thought it would be a nice and fair way to keep it open yet provide enough possible money for developing my larger project further. I am changing the proof of work algorithm too; I have mostly designed the underlying sort/search protocol for a variant of Cuckoo Cycle that massively exploits data cache locality and memory block alignment, which potentially means little possible improvement over a CPU, because cache memory is a very large component of the cost of producing processors. So hopefully it will be a coin people can mine without the sudden increases in network hashrate that recently hit Sia, Monero and all the equihash coins.

Anyway, very nice work, and I wish more programmers wrote code like yours - it would make the cryptocurrency space so much more vibrant.

You may have already noticed, if github tells you when someone forks your repository, but this is what I have done so far - though I have been busy with other work for a few days and haven't yet finished writing the ultra-precision math library my changes will need; hopefully tomorrow I can start again. https://github.com/calibrae-project/spawn
76  Other / Off-topic / Hummingbird Proof of Work - a new design with a pretty diagram to explain it on: February 25, 2018, 05:37:22 AM
First, the link: it is a diagram stored on my Google Drive, and it pretty much covers everything, before I present some spiel about it:

https://drive.google.com/file/d/1kq9_b-n2n-5ZRYviX1YfULO7h2arSoaW/view?usp=sharing

I have developed this after examining the Cuckoo Cycle and beginning attempts to implement it in Golang. There is already a modified version written in Golang, but it uses much shorter edge bit sizes and cycle lengths (unfortunately it is not working for me because of the siphash amd64 assembler code included in it: https://github.com/AidosKuneen/cuckoo )

Briefly, my assessment of the algorithm is that it is overly complex, and that the solutions take up a lot of space and can be easily compressed by changing the algorithm - which is exactly what Hummingbird is.

Instead of requiring the actual solution coordinates to be explicitly listed in the solution, it contains the nonce and a list of indexes of positions in the hash chain (hashes of hashes of hashes, etc.): each is a 16 bit value referring to a position, which in turn refers to 1+4x32 bit hashes, 4 for each head and tail of the vectors.

Given that this algorithm uses 32 bits for each coordinate in an adjacency-list format graph, it is likely that, for a given compute capacity of the mining network, it will not require very long cycles to hit the block rate target.

In my analysis, Cuckoo Cycle is in principle a type of hash collision search: instead of searching for values below a maximum magnitude, it searches for a chain of hash collisions that do not occur more than twice before the cycle length target. The way John Tromp designed it is overly complex, in my view, probably due to his extensive background writing hashcash-style algorithms in Cuda and OpenCL. His algorithm looks, to be precise, for subsets of hashes that collide, and requires a lot of bitwise operations which, while not expensive, can be dispensed with entirely by using a scheme like the one I have designed.

Essentially, what I have done is make each graph coordinate set its own individual hash result, and I picked Murmur3 for its speed. It is possible that a shorter hash length would make for longer cycles, and perhaps more precision in hitting block rate targets. At the same time, the solver algorithm (see the link above) pops out the solution in fairly linear time based on the total computation and memory access required to find it exactly. Again, as distinct from Cuckoo - which has to generate the hash chain, sort it, then search it - Hummingbird uses 7 separate slices and searches and tabulates during the generation of each edge, so the minimum time may be shorter in some cases and longer in others.

Hash algorithms by their nature produce uneven hash collision patterns depending on the data, so some solutions will take less time than others. In theory, every nonce can eventually be found to have cycles of all kinds of lengths, given an arbitrary amount of time and data to sort through, but this is capped by what adds up to an array of 128-bit entries with a 16 bit index: a smidge over 4gb of memory for the hash chain in the worst case, plus 128 bits per candidate during search (so another 4gb), and again another 4gb in the worst case for the solution/reject slice arrays.

So, in theory, the algorithm tops out around 12Gb of memory utilisation, which is part of my intention: to create an algorithm that requires intensive random access to more memory than a current-generation, economically viable video card has. When this ceases to be the case (maybe 3-5 years from now), it can simply be expanded to use 32 bit array indexes in the solution, enabling a much larger search space and longer cycles; longer hash lengths (64 or 128 bit) will also raise the worst-case memory utilisation.

There is a small possibility that the specific hash used has potential attacks - well, optimisations - that accelerate the discovery of hash collisions, but the hash chain requirement makes this more difficult: while you can find hash collisions, the finite field of a hash chain is a distinct subset of the total hash collision field. If an attack on a specific hash algorithm is found, it would not be difficult to switch to another, which would set attackers back a long way, especially if the new hash uses an entirely different category of hashing technique to the one that was compromised.

I suppose using hash chain fields as the search space will also further weaken quantum attacks as an alternative way to accelerate the search, should qbit-based computation ever become anywhere near competitive with silicon.
77  Alternate cryptocurrencies / Mining (Altcoins) / Re: New Cuckoo Cycle GPU solver available. Bounties included... on: February 13, 2018, 09:56:24 PM
There are only so many cloud servers, and if this were the case, their price would massively ramp up (maybe time to invest in cloud server companies?)

So, I'd be talking about cuckoo34 then. That puts it beyond GPU solvers and into the domain of 32gb+ cpu/motherboard combinations. This is not something that can be optimised with any kind of special card or anything; the price is pretty flat. It might raise the cost at first, but then the market would probably provide this standard at a lower cost after a while. There's no possible way it's going to let GPUs into the game, though - at least not before screens over 3840 pixels wide become cheap enough to create demand for cards with over 16gb of memory. I mean, I remember 75 and 80hz refresh displays, playing games way below my hardware's capability and watching things swish around. Now we are back to 60hz! URGH, TEH FLICKER!

I will just push it to see how far it goes between solutions. You are right, it might only require cuckoo32 to put it out of reach of anyone who can't buy hundreds of cpu/motherboard/memory units. The main point is that the profit-per-cost ratio is levelled again, and that requires putting it out of reach of GPUs. For now that sounds like cuckoo32, but later on it may require 34.

What I meant in the last thing you responded to was to shift the bandwidth bottleneck to the network. That would require another model completely, and actually I thought about this idea a long time ago: 'proof of service'. It would admittedly concentrate mining power a little towards places where bandwidth is cheaper, but there aren't that many places where bandwidth is much cheaper than average, and they are fairly randomly distributed. Not only that, but Maidsafe and others have already started working on this. Still, very few projects are looking at making proof of bandwidth service a means to gain the right to issue new tokens.
78  Alternate cryptocurrencies / Mining (Altcoins) / Re: New Cuckoo Cycle GPU solver available. Bounties included... on: February 13, 2018, 07:18:11 PM
So, as expected (I had this same thought in the early days of equihash's development), memory bandwidth is ultimately the performance limiter. Thus you discovered quite quickly in your development cycle that you can run the solver significantly faster on a significantly faster memory architecture and memory bus, i.e. on a video card.

I know that 16gb-32gb of memory is becoming commonplace, but it occurs to me that a new target for these kinds of proof of work algorithms could be disk storage, which can be forced by requiring over 32gb of memory. Very few people have much more than 32gb of memory in their system, whereas most have more than 32gb of storage on disk. Presumably NVMe SSDs would naturally become a vessel for this.

So, I have started reading into your algorithm, and it occurs to me that you can push this out of the memory bandwidth box and onto disk simply by requiring significantly more than 8gb of memory. Very few video cards have more than 8gb of memory, and to be safe, targeting 16gb would put it outside the range of GPU processing and bring it back down to the CPU. Pushing the requirement beyond 32gb would shift the performance bottleneck to disk caches.

I haven't read perhaps as deeply as I could into the algorithm, but as I gather from initial reading, you can force the memory requirement upwards by requiring longer cycles in the graph. 42 is a lot, but as you have found, around 3gb is enough to store random graphs that yield this nice (funny) number of nodes in a cycle. What would happen, though, if you raised the minimum to, say, 56 or 64 nodes in a cycle? I would think the odds of finding such a solution shrink by powers of two for each node added to the minimum cycle length - likely ending up beyond powers of 10 rarer than the 42-node solutions.

As an interesting aside, these types of large graph tables are routinely used in rasterisation and ray tracing algorithms; the former have been pretty much maxed out, such that photorealism requires the equivalent of a pair of 1080ti's at anything above 3840x2160 resolution and more than 60 frames per second.

I am looking into this because I am seeking a PoW algorithm that will not fall to GPUs even within 12 months. So I am going to explore Cuckoo Cycle, but with a massively increased requirement for the number of nodes forming a loop in the seed-generated graph. I want to see if I can force my machine into swap and turn the NVMe drive into the primary bottleneck, which will drastically reduce the solution rate. An NVMe drive is not that expensive, but its bus is definitely slower than the PCI bus and slower than the memory bus.

Onward and upward... to move from memory hard to IO hard Smiley

PS: I see that the cuckoo cycle is an implementation of the graph-solving puzzle of finding a path through some arbitrary number of nodes in a graph. It's quite interesting, because this path-finding puzzle was at the centre of what enabled internet routing. It naturally requires a certain minimum of processing and memory utilisation, even on a (relatively) static graph such as internet routes. It occurs to me that a next generation beyond loading up storage bandwidth would be to bind the solution to a network graph, which naturally greatly increases the latency and greatly reduces the throughput of finding solutions - though this also introduces byzantine attacks that cut the paths and prevent solutions from being found, since it would depend on cooperative action by nodes responding to the solvers. Just a thought.
79  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Calibrae - Cryptocurrency and Forum based on SporeDB technology - Presale now on on: October 04, 2017, 01:59:07 AM
Sorry about this, but I finally burned out. I have been the only one really making this happen, and though I had a few donations, I did a lot more work than their value covered, and I just got so worn out by it all. I have been funding myself with a little farm of zcash miners, and I originally got started on this out of a rare turnaround in the price of Steem.

I have turned off the shop website calibr.ae because I can't afford to keep paying 29 euros a month while getting zero serious interest. I turned off the facebook page too, because it was so tightly bound to the shop, and I deleted the discord server because it was starting to really get to me how futile it all seemed... The gitlab site https://git.calibr.ae is still operational, and I will continue to work on actually building it, but don't hold your breath on how long it might take, considering it's probably going to be only me doing it for quite some time.

I would have got more code written if I hadn't let people lead me astray towards making it public so soon, trying to raise funds to get it happening. I'm moving house next week, to somewhere a lot quieter and surrounded by beautiful mountains; my rent will be cut in half, and if I can keep myself from being drawn into promoting the project, maybe I will actually be able to get a decent amount built.
80  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Calibrae - Cryptocurrency and Forum based on SporeDB technology - Presale now on on: October 01, 2017, 10:08:20 PM
I am not new to the crypto scene, and I have been involved with online forums since 2002. I have stripes from nasty experiences on the darkweb. I don't think what I have been through is material, but I certainly know more about this business than 99% of anyone reading this.

I don't see what worth there is in selling a genuine product, against an array of scams, with a scammy-looking introduction. I have very substantial and altruistic reasons for launching this project. I am trying to change the world. Fly-by-nights have no reason to try to change the world, because its wealth of idiots and sycophants suits them quite well. I think my unregaled introduction post actually stands out and grabs the eye *because it's not trying to stand out and grab the eye*.

I was reluctantly persuaded to post on this forum, because it is generally considered the 'authority' on cryptos. But anyone who thinks this business is not riddled with scammers and scumbags is deluded, and anyone who thinks fancy dressing is any substitute for a real product is amongst the rabble just looking for a pump and dump to ride. This is not a pump and dump. We are not planning to run away anywhere after this. This is our life. Everyone involved is taking a serious risk of being attacked by scumbags. We mean to make a system that actually matters.