Bitcoin Forum
Author Topic: [Announce] Project Quixote - BitShares, BitNames and 'BitMessage'  (Read 48264 times)
charleshoskinson
Legendary
Offline

Activity: 1134
Merit: 1008

CEO of IOHK

August 23, 2013, 10:54:40 PM
 #41

Quote
Four months ago?  That comment was left an hour ago and explicitly links to a URL that didn't exist until today.

https://github.com/bitcoin/bitcoin.org/pull/162#issuecomment-23188122

Let's post the link for clarity. The comment came from a conversation with one of the posters in the repo and was part of a much larger discussion. Again, it has since been deleted because it does not reflect what we are trying to accomplish.

The revolution begins with the mind and ends with the heart. Knowledge for all, accessible to all and shared by all
charleshoskinson
Legendary
Offline

Activity: 1134
Merit: 1008

CEO of IOHK

August 23, 2013, 11:07:19 PM
 #42

Quote
What about Mozilla Firefox support?

Browser plugins have been explored as a way both to increase ease of use and to integrate better with existing user experiences. I'm a big fan of Cryptocat, and I can imagine similar apps working well in Chrome and Firefox; however, these would be side projects pursued after the core product has been relentlessly refined.

The revolution begins with the mind and ends with the heart. Knowledge for all, accessible to all and shared by all
charleshoskinson
Legendary
Offline

Activity: 1134
Merit: 1008

CEO of IOHK

August 23, 2013, 11:10:47 PM
 #43

Quote
Seriously just wow.

Have you got a rough list of tasks/milestones/priorities? Github? How can a volunteer get stuck in and help make this happen?

We'd love for you to attend C3 and meet us in person. We'll have a much more significant announcement there, as well as specific roadmaps and timetables for everyone who'd like to be a volunteer, business partner or developer for the platform. We'd also love to give you a software demo of our communication and ID systems. We'll bring a laptop Smiley

The revolution begins with the mind and ends with the heart. Knowledge for all, accessible to all and shared by all
gmaxwell
Staff
Legendary
Offline

Activity: 4172
Merit: 8419

August 24, 2013, 04:33:31 AM
Last edit: August 24, 2013, 05:36:47 AM by gmaxwell
 #44

Quote
Quote
Four months ago?  That comment was left an hour ago and explicitly links to a URL that didn't exist until today.
This is a great example of how the Bitcoin core developers are shooting themselves in the foot by attacking people.  The developers want a closed little world where they make all the decisions while, at the same time, they want mass adoption so their cache of Bitcoins becomes more valuable.  You can't have it both ways.
I'm not sure what you want here. I'm just stating the facts: I was confused by the response because it appeared to be claiming that I was posting old material, not stuff which hit my inbox moments before.

Needless to say, I'm not entirely enthused to find out about something new through that kind of rude message. If someone doesn't want me pointing it out, then they should refrain from sending it and darkening my inbox with it.

To the best of my ability to discern, this is an altcoin / opencoin competitor with a number of seemingly marginal ideas being thrown at it, and it is being— in my opinion— somewhat deceptively marketed as improvements to Bitcoin when it is more of an alternative.  The people responsible for it apparently (see the post) view it as a mechanism for undermining Bitcoin, or at least the people who've been working on Bitcoin thus far. I, as you might expect, think that's disappointing. I see now that the message has _since_ been deleted, so it would seem that the team here isn't entirely lacking in PR skills.

I'm usually pretty happy in the rare event that I see genuinely new ideas explored in altcoins, less so to see them promoted trading on Bitcoin's brand, and less so when they're really just shallow rehashings of the ideas in Bitcoin, often twiddled and changed without a deep understanding of the implications. They might have been able to convince some VC, who hasn't had the benefit of working in this space for a number of years and seen the same bad ideas over and over again, to fund this effort, but that doesn't mean I have to be impressed by it. If you want to fault me for sharing my thoughts, be my guest, but I continue to see part of my value in this community as being a person who sees through obfuscation and marketing.

Unfortunately for whatever "It's over" for me that Hos has planned, it doesn't seem likely to me that his team is technically prepared to pull it off.

Allow me to demonstrate,

The POW is an easily understood, relatively isolated part of a POW blockchain cryptocurrency. BitShares claims to have a novel POW with a number of desirable properties.  So let's take a look at it and see if it lives up to their claims.

Quoting from their PR piece:
Quote
Significantly, the Invictus Project uses a different proof of work to the two incumbent ones (Bitcoin’s SHA-256, and Litecoin’s Scrypt). The former rewards ASIC miners, while the latter rewards GPUs. The Invictus one will focus on general purpose CPUs, keeping them 32-64 times faster than a GPU for mining. To do this, it has a high RAM requirement, and also relies on sequential data processing.
I'm pretty skeptical of the claimed advantages altcoins often make about their modified POW. Like we saw with Litecoin's "GPU proof", they don't usually hold up— or they have unexpected effects, if any at all.  At least in the case of Litecoin they used a published cryptographic construct with an extensive paper providing some pretty strong substantiation for its properties; novel cryptography is a real easy way to burn yourself.

The structure of their POW is that it takes a SHA256 hash of the block header and then repeats it to fill up a 128 MByte buffer. It then runs an apparently home-brew "memory hard" function over it which performs a number of xors and makes data-dependent swaps of the data. Then it runs a 128-bit non-cryptographic "CRC" function over the data, and finally computes a SHA512 of that.

The code remarks:
Quote
* This proof-of-work is computationally difficult even for a single hash,
* but must be so to prevent optimizations to the required memory foot print.
* The maximum level of parallelism achievable per GB of RAM is 8, and the highest
* end GPUs now have 4 GB of ram which means they could in theory support 32
* parallel execution of this proof-of-work.
The comments continue for a screen full describing the characteristics and merits of the particular algorithm.

Indeed, as implemented, it's very expensive to run this function. This has some pretty serious negative consequences, since any client of this network or any full node catching up will need to do a lot of computation just to check the validity of blocks. That matters because being able to detect valid-looking blocks cheaply is normally an important anti-DOS step. But I assume they realize this and are prepared to deal with the consequences in exchange for this function's benefits.

But does it live up to its claims?

Here is the actual code:

Code:
fc::uint128 proof_of_work( const fc::sha256& in, unsigned char* buffer_128m )
{
   const uint64_t s = MB128/sizeof(uint64_t);
   uint64_t* buf = (uint64_t*)buffer_128m;
   sfmt_t gen;
   sfmt_init_by_array( &gen, (uint32_t*)&in, sizeof(in)/sizeof(uint32_t) );
   sfmt_fill_array64( &gen, buf, s );
   uint64_t data = (buf+s)[-1];
   for( uint32_t x = 0; x < 1024; ++x )
   {
      uint64_t d = data%s;
      uint64_t tmp = data ^ buf[d];
      std::swap( buf[tmp%s], buf[d] );
      data = tmp * (x+17);
   }
   return fc::city_hash_crc_128( (char*)buffer_128m, MB128 );
}

I don't believe that it does.  Within 30 seconds of seeing the code I made the following observations:

  • The interior 128-bit bottleneck opens it up to a collision attack with an average work factor of 2^64 (and 2^64 storage, or a bit more work and less storage by constructing a rainbow table).  I consider this mostly a certificational weakness and not a practical attack, though in theory it could be performed today, especially if the memory-hardness is eliminated, allowing a cheap and fast ASIC or FPGA implementation.
  • The simple xor-and-constants construction would almost certainly yield to boolean logic simplification, potentially even subsuming the "CRC" step since it's not intended to be a cryptographic function.
  • The memory-hardness can be removed in a probabilistic approximation of the POW function, built on the observation that 1024 is a very small fraction of 16777216, so it's unlikely that any iteration of the interior loop will read from an entry which has already been updated. It could just be run 1024-way parallel, and will most of the time produce a correct result.
  • Alternatively, the memory-hardness can be removed by pivoting the algorithm about its main-loop and using 1024 words of storage for the 1024 possible writes the algorithm can make. This is an exact implementation, not probabilistic.

I went ahead and implemented the last in plain ANSI-C, since it seemed the most convincing:

Code:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdint.h>

/*Original Invictus Innovations BitShares POW inner loop.*/
void orig_f(uint64_t buf[16777216])
{
  uint64_t data=buf[16777216-1];
  uint32_t x;
  for(x=0;x<1024;++x)
  {
    uint64_t t2;
    uint64_t d=data&16777215;
    uint64_t tmp=data ^ buf[d];
    t2=buf[tmp&16777215];
    buf[tmp&16777215]=buf[d];
    buf[d]=t2;
    data=tmp*(x+17);
  }
}

/*Past state lookup function.
 *In the probabilistic/parallel version of this attack, this
 * function is eliminated and we would just assume that there
 * were no hits on the prior modification table and just merge
 * the results at the end.
 *In hardware this would get unrolled into mux-trees that worked in constant time.
 * (or more likely, you'd just use the probabilistic version in hardware)*/
static inline uint64_t cpx(const uint64_t buf[4], const uint64_t mem[2048], const int loc[2048],int i, int x)
{
  int j;
  uint64_t out;
  out=buf[x&3];
  for(j=0;j<(i<<1);j++) {
    int pos=(i<<1)-1-j;
    if(loc[pos]==x) {
      out=mem[pos];
      break;
    }
  }
  return out;
}

/*Version of orig_f that doesn't need lots of memory*/
void not_mh(uint64_t buf[16777216])
{
  /*Doing this with 1024 words instead of 2048 would
   * be pretty trivial since one of the two updates is always
   * adjusting the prior read.*/
  int loc[2048];
  uint64_t mem[2048];
  uint64_t data=buf[3]; /*Note we never read past buf[3]*/
  uint32_t x;
  for(x=0;x<1024;++x)
  {
    uint64_t t2;
    uint64_t d=data&16777215;
    uint64_t lu0=cpx(buf,mem,loc,x,d);
    uint64_t tmp=data ^ lu0;
    uint64_t mt=tmp&16777215;
    t2=cpx(buf,mem,loc,x,mt);
    loc[x<<1]=mt;
    mem[x<<1]=lu0;
    loc[(x<<1)+1]=d;
    mem[(x<<1)+1]=t2;
    data=tmp*(x+17);
  }
  /*If the CRC were a real CRC it would absolutely be possible to avoid
   * running it on the full input size, taking advantage of the fact
   * that most of the data is repeated, it still may be for
   * city_hash_crc_128 but I haven't looked.
   *In the real code, the 'CRC' would just be made to gather from
   * the sparse array. Here we just write it back out to make it easy to
   * compare.*/
  for(x=0;x<2048;x++)buf[loc[x]]=mem[x];
}

int main(void)
{
  int i;
  int tries;
  int match;
  uint64_t *buf;
  uint64_t *buf2;
  FILE *r;
  buf=malloc(16777216*sizeof(uint64_t));
  if(!buf)return 1;
  buf2=malloc(16777216*sizeof(uint64_t));
  if(!buf2)return 1;
  /*Rather than input from SHA256, we just use 256 bits from /dev/urandom.*/
  for(tries=0;tries<100;tries++) {
    r=fopen("/dev/urandom","rb");
    if(!r)return 1;
    if(fread(buf,sizeof(uint64_t),4,r)!=4)return 1;
    fclose(r);
    for(i=1;i<4194304;i++)memcpy(&buf[i*4],buf,sizeof(uint64_t)*4);
    memcpy(buf2,buf,sizeof(uint64_t)*16777216);
    /*Run the original "memory hard" function.*/
    orig_f(buf);
    /*Run the lol version.*/
    not_mh(buf2);
    match=1;
    for(i=0;i<16777216;i++)match&=buf2[i]==buf[i];
    if(match)printf("They match! (%llu)\n",(unsigned long long)buf[0]);
    else printf("Boo! No match. (%llu)\n",(unsigned long long)buf[0]);
  }
  free(buf);
  free(buf2);
  return 0;
}

This reduces the memory footprint from about 134,217,728 bytes to about 24,576 bytes and could be made half that size with slightly more work.
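For reference, the 24,576-byte figure is just the size of the two scratch tables in not_mh(), assuming a typical 4-byte int:

$2048 \times 8\ \text{bytes}\ (\texttt{mem}) + 2048 \times 4\ \text{bytes}\ (\texttt{loc}) = 24{,}576\ \text{bytes}$

The 1024-entry variant mentioned in the code comments would roughly halve that.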

It outputs a whole bunch of:
Quote
They match! (10825977267245349006)
They match! (12732965183411680069)
They match! (4744567806717135351)
They match! (8276864928471265477)

Frankly, if I were out to make a bad POW that would be easy for me and hard for others, I don't think I would use one as obviously flawed as this one. But at least it makes it cheaper for validating nodes to check!

I haven't looked at any other parts of the BitShares code— I was not especially inspired by this part— but keep in mind: there are other cryptocoin systems which are completely closed, and I couldn't even tell you what laughably bad things they are doing. The developers here should not be penalized for building their tools in the open; not everyone does. They should be lauded for this much, if nothing else. Smiley

I honestly hope this small security / cryptography analysis is helpful, and that the price of me using it to blast Invictus Innovations a bit in public wasn't too high.  I'd recommend avoiding any crypto more novel than what the Bitcoin ecosystem is using, at least unless you get a professional cryptographer on your team (who will then also tell you not to use novel crypto).

If you genuinely are interested in making an asset trading system which is complementary to Bitcoin, I'd strongly suggest merge-mining it, as you would obtain the protection of the enormous hashpower held by the Bitcoin community that such infrastructure would serve. It doesn't sound fantastic in a VC pitch, but if you don't intend this to be an attack on the Bitcoin ecosystem you'd enjoy a lot more security that way.  I think it's still an open question what the necessary economic incentives are for POW consensus to have lasting security...

There are a lot of people with loud ideas about how they want to change Bitcoin to make it better. Sometimes they get angry that the core developers will not consider their pet modifications. Many of the ideas are just simply bad, like this one, and would lead to insecurity or would disrupt the economic promises many users consider immutable. Often the badness in an idea is subtle and takes a lot more work to tease out than this one did, so with limited resources the onus has to be on the proposer to show that their work is necessary, beneficial, and safe. This isn't because the people with the ideas are not smart or good people; it's because ideas in this space are tricky and take a lot more consideration than many realize.
charleshoskinson
Legendary
Offline

Activity: 1134
Merit: 1008

CEO of IOHK

August 24, 2013, 05:10:59 AM
 #45

Quote
I honestly hope this small security / cryptography analysis is helpful, and that the price of me using it to blast Invictus Innovations a bit in public wasn't too high.  I'd recommend avoiding any crypto more novel than what the Bitcoin ecosystem is using, at least unless you get a professional cryptographer on your team (who will then also tell you not to use novel crypto).

If you genuinely are interested in making an asset trading system which is complementary to Bitcoin, I'd strongly suggest merge-mining it, as you would obtain the protection of the enormous hashpower held by the Bitcoin community that such infrastructure would serve. It doesn't sound fantastic in a VC pitch, but if you don't intend this to be an attack on the Bitcoin ecosystem you'd enjoy a lot more security that way.  I think it's still an open question what the necessary economic incentives are for POW consensus to have lasting security...

There are a lot of people with loud ideas about how they want to change Bitcoin to make it better. Sometimes they get angry that the core developers will not consider their pet modifications. Many of the ideas are just simply bad, like this one, and would lead to insecurity or would disrupt the economic promises many users consider immutable. Often the badness in an idea is subtle and takes a lot more work to tease out than this one did, so with limited resources the onus has to be on the proposer to show that their work is necessary, beneficial, and safe. This isn't because the people with the ideas are not smart or good people; it's because ideas in this space are tricky and take a lot more consideration than many realize.

Gmax, I appreciate the feedback, and thanks for your brief analysis of our alpha code and placeholder PoW. The code base is highly fluid at this point and not sufficiently formed to survive even a rudimentary cryptanalysis. I think you tend to forget the amount of time and effort it took for Bitcoin to evolve and harden. We are at the beginning of this process and focused on a great many moving pieces that will soon come together.

The PoW you analyzed was never intended for a production system, as we were much more concerned with other issues in the ecosystem, and it was scheduled to be updated with a new design in early September, which corresponds to the statements made in the Coindesk article. You may have noticed the code hadn't been touched for more than a month until today. We really do appreciate you taking the time to look at the repo and would love for you to drop by from time to time to challenge us on specific decisions.

We are also going to offer some bounties after the C3 conference, so it appears you have an opportunity to make some Bitcoin off of our emerging crypto. Thanks for your time.

The revolution begins with the mind and ends with the heart. Knowledge for all, accessible to all and shared by all
charleshoskinson
Legendary
Offline

Activity: 1134
Merit: 1008

CEO of IOHK

August 24, 2013, 05:15:07 AM
 #46

Quote
All you post is a bunch of hyperbolic crap that never addresses the issue.  You and a small group of friends lock most people out of most decisions, and if they complain you start with your rambling complaints and attacks.  If you want large adoption you have to give up some of the power, just like Gavin gave away many Bitcoins with the faucet.  You guys should be lauded for the development of the software, but that will all be ruined if you keep locking people out of the process.

Milly, I understand your sentiment and share a desire for more openness; however, this isn't the thread to discuss such matters. Honestly, we are just trying to get the community to focus on our efforts and to answer questions about high-level aspects of a both very exciting and very complex system. Rehashing old fights and performing a cryptanalysis on alpha placeholder code isn't productive for this thread. I mean, we didn't even use a CSPRNG to populate the memory.

I'd be happy to debate Gmax and others alongside you in a different thread at any time.

The revolution begins with the mind and ends with the heart. Knowledge for all, accessible to all and shared by all
bytemaster (OP)
Hero Member
Offline

Activity: 770
Merit: 566

fractally

August 24, 2013, 05:54:06 AM
 #47

gmaxwell,
   You have stumbled upon an area in our code base that was under active experimentation, and the problems you identified with it actually pale in comparison to the problems we found with it prior to your post.  For starters, we were experimenting with various ways of accelerating the validation time and profiling different settings of the inner loop.  The code that was checked in just happened to be the last benchmark for that round of testing before we put proof of work on the back burner until we could come back to it.  The random number generator that populated the whole thing was not secure, and you could quickly calculate any index required.

   We have identified the principles and design specs for our proof of work, with the primary goal of making it memory hard.  Finding a collision on a 128-bit hash function that takes this long to run, and that simultaneously satisfies all of the requirements for the block chain, is not a legitimate concern given the time constraints, and you rightly pointed out that it was theoretical.  Assuming a truly memory-hard hash function without the obvious weaknesses of the code you reviewed and assumed was our 'production' algorithm, 128 bits should be plenty.  Now obviously, cryptographers like to err on the side of caution, and we would be looking for community consensus / review of every cryptographically significant part of our code before ever launching it into the wild.  We are obviously trying to balance security overkill with decentralization and bandwidth.
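
   For context, the 2^64 figure on both sides of this exchange is just the generic birthday bound: a collision on an $n$-bit hash is expected after about

   $q \approx 2^{n/2}, \qquad n = 128 \;\Rightarrow\; q \approx 2^{64}$

   evaluations. The practical question is whether a single evaluation is expensive enough, in time and memory, that $2^{64}$ of them stays out of reach.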

   Even with all of the faults of the test algorithm, it is still no worse than Bitcoin, because it ultimately relies on sha256().

   So we have high goals, solid guiding principles of attempting to create a memory-hard hash function, and the ultimate ace-in-the-hole to ensure long-term viability: flexibility.  The 'currency contract' between the developers, users, and miners was to establish the common intention that at any time, if the hash function was optimized for GPU or ASIC, it could and would be changed by a majority vote (hash power).  The CPU holders would have too much to lose not to vote for a change that undermined the ASICs, and the mere threat of this option would prevent the production of ASICs in the first place.

   What I gather from your post is that we shouldn't attempt to innovate and should conform to the status quo.  The real question is this: is it possible to develop an algorithm that is secure, requires 128 MB of RAM, and yet can be validated in a fraction of a second?  I believe the answer to that question is *yes*, it is possible, and therefore it can be achieved.

   There are two kinds of people in this world, those with a can-do attitude and those who go around telling everyone that it is impossible for man to fly.    
    
   One last factor as to why your merged mining suggestion is horribly flawed: it means that the existing Bitcoin Miner Barons would 'own' the network from day one, especially if we supported merged mining with Bitcoin and bloated our blockchain in the process.  This is not in the best interest of anyone but the Miner Barons.

   There is a reason why everything is open source: with enough eyes on the problem, it will get solved.  The true value of our system is not the hash function.  We are just trying to make things better while launching a new chain where the true value is the economic transactions it can enable.

Dan        

https://fractally.com - the next generation of decentralized autonomous organizations (DAOs).
harounkola
Newbie
Offline

Activity: 17
Merit: 0

August 24, 2013, 07:39:21 AM
 #48

Sounds fascinating. I'm watching and following this too
 Roll Eyes
bytemaster (OP)
Hero Member
Offline

Activity: 770
Merit: 566

fractally

August 24, 2013, 07:42:49 AM
Last edit: August 24, 2013, 08:04:08 AM by bytemaster
 #49

Design Requirements of Proof-of-Work:

1) Require a relatively large amount of RAM that cannot be 'optimized away' through various techniques such as:
      - streaming the data, random sampling, or jump-ahead shortcuts.
2) Must be able to validate the proof-of-work in about 0.25 seconds; even at that rate it would take several hours to verify the whole chain's work.
3) Must rely on CPU instructions that have no equivalent on the GPU.

The challenge with such a proof of work is keeping the validation time relatively fast, and this is where the temptation to take shortcuts comes into play.  It is trivial to create a 'secure' CPU-only hash; the challenge is making it fast to validate.  I submit the following revision to the proof of work as an example of what can be done in a brute-force manner that should address every single item presented by gmaxwell.

Code:
pow_hash proof_of_work( const fc::sha256& iv, unsigned char* buffer_128m )
{
   auto key = fc::sha256(iv);
   const uint64_t  s = MB128/sizeof(uint64_t);
   uint64_t* buf = (uint64_t*)buffer_128m;
   memset( buffer_128m, 0, MB128/2 );
  
   fc::aes_encrypt( buffer_128m, MB128/2, (unsigned char*)&key, (unsigned char*)&iv,
                    buffer_128m + MB128/2 );
  
   uint64_t offset = buf[s-1] % ((MB128/2)-1024);
   fc::sha512 new_key = fc::sha512::hash( (char*)(buffer_128m + offset + MB128/2), 1024 );
                    
   fc::aes_encrypt( buffer_128m + MB128/2, MB128/2, (unsigned char*)&new_key,
                                                    ((unsigned char*)&new_key) + 32,
                    buffer_128m  );

   auto midstate =  fc::city_hash_crc_256( (char*)buffer_128m, MB128 );
   return fc::ripemd160::hash((char*)&midstate, sizeof(midstate) );
}

First, let's consider the input, sha256( merged_mining_merkel_root + nonce ); this is as secure as double sha256(bitcoin header).
Next, we populate 64 MB of memory via AES encryption of 64 MB of 0's.  There is no way to jump ahead with this random number generator, and it utilizes Intel AES hardware acceleration.
Next, we use the last encrypted uint64 to pick a random KB of the encrypted data to sha512-hash, generating a new initial value + key for a second round of AES.
This second round of AES encrypts the results of the first round, which requires the whole first round to be kept in memory and also utilizes Intel AES hardware.
Then we take a 256-bit digest using more CPU-only instructions via city_hash 256.
Lastly, we end the whole operation with a cryptographically secure hash to compress it down to 160 bits, which, given the requirements of the input, is more than enough bits to prevent rainbow tables.
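
For anyone who wants to poke at the fill step outside our tree, here is a rough, self-contained sketch using OpenSSL's EVP API rather than our fc wrappers. It is an illustration only: AES-256-CBC is assumed (the mode isn't visible in the snippet above), and the zero key/IV are placeholders for the real sha256-derived values.

Code:
#include <stdio.h>
#include <stdlib.h>
#include <openssl/evp.h>

/* Fill `len` bytes of `out` by AES-256-CBC encrypting a buffer of zeros.
 * CBC chaining means ciphertext block i depends on block i-1, so there is
 * no jump-ahead: you cannot produce byte N without producing everything
 * before it. `len` must be a multiple of the 16-byte AES block size. */
static int fill_by_aes(unsigned char *out, int len,
                       const unsigned char key[32], const unsigned char iv[16])
{
    int outl = 0, ok = 0;
    unsigned char *zeros = calloc(len, 1);
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    if (zeros && ctx &&
        EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv) &&
        EVP_EncryptUpdate(ctx, out, &outl, zeros, len))
        ok = (outl == len);
    EVP_CIPHER_CTX_free(ctx);
    free(zeros);
    return ok;
}

int main(void)
{
    const int len = 64 * 1024 * 1024;            /* 64 MB, as described above */
    unsigned char key[32] = {0}, iv[16] = {0};   /* stand-ins, not the real derivation */
    unsigned char *buf = malloc(len);
    if (!buf || !fill_by_aes(buf, len, key, iv)) return 1;
    printf("last byte of the fill: %u\n", buf[len - 1]);
    free(buf);
    return 0;
}

It builds with a plain 'gcc file.c -lcrypto'.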

I can mine this at 5 Hz with 8 cores on a MacBook Pro Core i7.  This doesn't quite hit my design goal, because on a single core it takes over 1 second to validate a single hash, and thus validating the proof-of-work on a year's worth of blockchain headers would take 24 hours.  So now the only question that remains is how we can accelerate this hash without compromising either the memory requirements or the security.
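
Working through those numbers: $8\ \text{cores} \div 5\ \text{hashes/s} = 1.6\ \text{s}$ per hash per core, consistent with the 1+ second single-core figure; and at 1 s per header, 24 hours of validation corresponds to about $86{,}400$ headers per year, implying a block interval of roughly $31{,}536{,}000 / 86{,}400 \approx 365$ s, i.e. about six minutes (the interval isn't stated above, so treat this as a back-of-the-envelope reading).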

It is worth noting that even at 1 second per block, validation time will be overwhelmingly spent in transaction validation of the chain, not the proof of work (assuming a reasonable number of transactions).  That said, I would still like faster validation times.

Options:

1) Reduce the memory requirement; this will increase performance linearly while making ASICs more viable.
2) Only perform 'spot-checking' with the second round of AES encryption; this could provide a 25 to 33% gain.
3) Potentially replace AES with Salsa20... though AES enjoys a hardware advantage that Salsa20 would lack.

Potential Attacks
An ASIC could implement a very fast AES algorithm: run through the first 64 MB to calculate the last 8 bytes, then a second time to find the key, then a 3rd and 4th time to generate the 2nd round and the city hash.  Such an ASIC would probably take 4 times as long to run each hash, but would be freed from the memory constraint and could therefore do more hashes in parallel.  This attack could probably be mitigated by a few changes that would require the ASIC to run through everything 1000 times rather than just 4 times.  But ultimately, the only defense against ASICs is change; they are like a virus that will mutate and adapt to any hash algorithm we create.  We just need to slow them down and reduce the magnitude of advantage they might see.

Conclusion:
The real goal of the hash algorithm is to control the block-production rate and keep verification decentralized while preventing 'forged blocks' from being created.  Every other criterion (RAM, instructions, etc.) is just a means to an end, not the end itself.  The only true security against ASICs is to change the hashing algorithm every year or so to something of roughly the same level of CPU difficulty.  In the meantime, all that is required is a secure hashing algorithm that is GPU resistant.

To that end, I suspect that reducing the RAM requirements down to 8 or 16 MB will enable validation times that are sufficiently fast while keeping GPUs at bay with a hashing algorithm that leverages SSE encryption instructions.

We are committed to delivering a working product and getting the best minds working with us that we can.  

https://fractally.com - the next generation of decentralized autonomous organizations (DAOs).
bytemaster (OP)
Hero Member
Offline

Activity: 770
Merit: 566

fractally

August 24, 2013, 04:23:41 PM
 #50

Quote
the only defense against ASIC

I am not convinced ASICs are something bad.  Right now you see all these stories of how ASICs are going to centralize mining, and how companies are supposedly making large investments in chips to mine for themselves, etc.  However, that is a short-term effect, and I think the development costs will quickly surpass the amount that can be mined.  Once that equilibrium is reached I am not sure what will happen.  A huge investment in mining equipment right now is high risk because you do not know how many other people are producing ASICs.

In other words, if you make it difficult to create ASICs, then a few rich, innovative people will make them anyway and it will centralize things.  Examples are the first people to figure out GPU mining, and then the story of Avalon, where one guy had a tray full of chips that was most of the hashing power of the network.  If you choose an algorithm that makes ASICs easy to produce, then many people can produce them and you have less centralization.

The nice thing about the proof-of-work algorithm is that it is the easiest thing in the world to change, especially if both the code and the community are prepared to do so.  All of that said, I would rather not focus so much of this discussion on proof of work, because that is the least significant aspect of our system.

https://fractally.com - the next generation of decentralized autonomous organizations (DAOs).
Luckybit
Hero Member
Offline

Activity: 714
Merit: 510

August 24, 2013, 06:11:05 PM
Last edit: August 24, 2013, 06:37:44 PM by Luckybit
 #51

Quote
I don't know how I would ever get anything done hanging out in an IRC channel all day.  We have our web developer actively working to set up forums dedicated to this project, and until then this thread will be the one where conversation on this topic will be focused.

At least make a twitter account and a Facebook page.
Quote
Quote
Hi bytemaster,

I finally took a look at your whitepaper; this latest version of your project is much more ambitious than I had imagined! The sheer scope of what you are attempting boggles my mind.

Do you have a feeling for when mining of bitshares will begin? Am I correct in assuming that mining will begin well before the planned feature set is complete? How much code has been written so far?

Best of luck!

This software will be released in phases, testing the less critical aspects thoroughly with a large user base of 'BitMessage' and 'BitShares ID' users who can use it for communication without really risking much financial value.  The goal is to have the Test Network up by Thanksgiving, and hopefully to launch the live network as soon as we can go a month without any major new bugs showing up.  The blockchain should support short/long, options, cross-chain trading, multi-sig, and simple escrow at launch, though full support for the escrow system will be built out over the coming year.

While the block chain will be ready and usable with an RPC / command-line interface, the GUI will take longer to mature.  Our schedule is highly dependent upon finding good developers and testers!



Can it be a web app? What about a high level API so we can program for it in Python or Ruby?
charleshoskinson
Legendary
Offline

Activity: 1134
Merit: 1008

CEO of IOHK

August 24, 2013, 06:32:11 PM
 #52

Quote
Can it be a web app? What about a high level API so we can program for it in Python or Ruby?

There are plans to do so with the Hydra client using Javascript, HTML5 and CSS3, and we've discussed web apps with the Web Payments group at the W3C, alongside browser extensions (which seem to be slowly fading out). When we launch our first beta we'll have some specs on how extensions will work, and also a roadmap for our API library.

The revolution begins with the mind and ends with the heart. Knowledge for all, accessible to all and shared by all
td services
Sr. Member
Offline

Activity: 448
Merit: 250

black swan hunter

August 24, 2013, 10:22:58 PM
 #53

Quote
At least make a twitter account and a Facebook page.

Twitter and Facebook are proprietary platforms, have been compromised by the NSA to spy on their users, and don't really offer any useful functionality beyond what can be achieved with a web site, email list, and forum.
td services
Sr. Member
Offline

Activity: 448
Merit: 250

black swan hunter

August 24, 2013, 10:44:20 PM
 #54

I'm very interested in BitShares and have questions on mining. I very much like that it favors CPUs over ASICs and GPUs.

Questions:

1. math expression to predict performance - will performance be a function of CPU speed, core quantity, and available RAM, like:

Hashes/second = A*(CPU speed GHz)*(CPU core quantity)*(RAM size GBytes), where A is a constant

or maybe

Hashes/second = A*(CPU speed GHz)^B*(CPU core quantity)^C*(RAM size GBytes)^D, where A, B, C and D are constants

another possibility is

Hashes/second = (A*CPU speed GHz)^B * (C*CPU core quantity)^D * (E*RAM size GBytes)^F, where A, B, C, D, E and F are constants

2. Will mining work equally well on Intel, AMD, and ARM processors?

3. Will multi core CPUs have higher performance per core than single core CPUs?

4. Is the speed bottleneck the CPU's internal clock speed or the memory bus speed?

These factors will determine whether it would be better to use high-end multi-core CPUs and fast memory, or whether low-cost System-on-Chip boards can be used, to achieve the lowest mining cost (per GigaHash?) per dollar.

I see a huge potential for this project if it can achieve its stated aims. Biggest risk I see would be to invest a lot in mining and then have the project not fulfill the rest of the planned features.
bytemaster (OP)
Hero Member
Offline

Activity: 770
Merit: 566

fractally

August 24, 2013, 11:19:29 PM
 #55

As you can tell from this thread, the mining algorithm is still in flux, but the goals are, as you stated, to favor the CPU over everything else.

Based upon my most recent thoughts (pending experiments) here is what I expect:

If the algorithm accesses the memory in random order that invalidates the CPU cache, then memory bus bandwidth will be the bottleneck.  After thinking about it, the memory bus is something that could be optimized out by an ASIC so this is not a desirable situation.

The algorithm needs to be 'fast to validate' which also implies you do not want to be memory bus bound.

The result is that I will probably target the hash memory requirements to near 8 MB (Core i7 CPU cache) which means that your best performance will probably be from single-core operation (multiple cores would start cache thrashing).  As Intel releases new chips with more cache you can start using more cores to positive effect.
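
If you want to see the cache effect for yourself, here is a crude standalone C sketch (an illustration only, not project code; absolute timings are machine-dependent). It chases data-dependent indices through a working set that fits in an 8 MB L3 cache, and then through one that does not:

Code:
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

/* Chase data-dependent indices through a small working set (fits in an
 * 8 MB L3 cache) and a large one (does not). Every load depends on the
 * previous one, so the prefetcher cannot hide the latency. */
static uint64_t walk(const uint64_t *buf, uint64_t mask, uint64_t iters)
{
    uint64_t i, idx = 0;
    for (i = 0; i < iters; i++)
        idx = buf[idx & mask] + i;   /* unpredictable, dependent load */
    return idx;
}

int main(void)
{
    const uint64_t big = 1ull << 25;    /* 2^25 * 8 B = 256 MB */
    const uint64_t small = 1ull << 20;  /* 2^20 * 8 B =   8 MB */
    const uint64_t iters = 1ull << 24;
    uint64_t *a = malloc(big * sizeof(uint64_t));
    uint64_t i, sink = 0;
    clock_t t;
    if (!a) return 1;
    for (i = 0; i < big; i++) a[i] = i * 2654435761ull;  /* scrambled values */
    t = clock();
    sink += walk(a, small - 1, iters);
    printf("  8 MB working set: %.2f s\n", (double)(clock() - t) / CLOCKS_PER_SEC);
    t = clock();
    sink += walk(a, big - 1, iters);
    printf("256 MB working set: %.2f s\n", (double)(clock() - t) / CLOCKS_PER_SEC);
    printf("(checksum: %llu)\n", (unsigned long long)sink);  /* defeat dead-code elim. */
    free(a);
    return 0;
}

On typical desktop hardware the 256 MB walk is several times slower per access; that gap is the memory-bus stall, which is why I would rather keep the working set inside the cache.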

A GPU has several different memory classes (global, shared, and local) and the global memory is much slower (like CPU to RAM) than the shared or local memory which behave like CPU cache.  So an algorithm that will fit in cache on a CPU will result in cache thrashing on a GPU.   This cache-thrashing combined with an unpredictable fetch pattern means the GPU will be stalled waiting on data most of the time.    GPU shared and local cache sizes are under 1 MB. 

Based upon transistor count, I would expect the highest end FPGA with 8 billion transistors (compared to 1 billion for an i7) to have a maximum gain of 8x assuming the ratio of transistors to execution units and cache was the same and there is no transistor count overhead or clock frequency disadvantages in the FPGA compared to an Intel ASIC.    By relying on Intel's AES instructions for 90% of the hash, you already have an almost ideal ASIC right in your computer.

So to answer your question:  CPU cache and AES hardware instructions will determine mining performance in one case, and memory bus speeds in the other. 

Note that the real protection against ASIC will be the community consensus and will to change the hashing algorithm to keep it CPU bound.


https://fractally.com - the next generation of decentralized autonomous organizations (DAOs).
Luckybit
Hero Member
Offline

Activity: 714
Merit: 510

August 24, 2013, 11:26:23 PM
Last edit: August 24, 2013, 11:40:49 PM by Luckybit
 #56

Quote
Quote
At least make a twitter account and a Facebook page.

Twitter and Facebook are proprietary platforms, have been compromised by the NSA to spy on their users, and don't really offer any useful functionality beyond what can be achieved with a web site, email list, and forum.

But that is where the people are. A lot of people use and are on Twitter, and for communication purposes it's just fine. What does it matter if the NSA sees some tweet telling people the software is passing some milestone?

But that is fine; where is the mailing list?
Quote
As you can tell from this thread, the mining algorithm is still in flux, but the goals are, as you stated, to favor the CPU over everything else.

Based upon my most recent thoughts (pending experiments) here is what I expect:

If the algorithm accesses the memory in random order that invalidates the CPU cache, then memory bus bandwidth will be the bottleneck.  After thinking about it, the memory bus is something that could be optimized out by an ASIC so this is not a desirable situation.

The algorithm needs to be 'fast to validate' which also implies you do not want to be memory bus bound.

The result is that I will probably target the hash memory requirements to near 8 MB (Core i7 CPU cache) which means that your best performance will probably be from single-core operation (multiple cores would start cache thrashing).  As Intel releases new chips with more cache you can start using more cores to positive effect.

A GPU has several different memory classes (global, shared, and local) and the global memory is much slower (like CPU to RAM) than the shared or local memory which behave like CPU cache.  So an algorithm that will fit in cache on a CPU will result in cache thrashing on a GPU.   This cache-thrashing combined with an unpredictable fetch pattern means the GPU will be stalled waiting on data most of the time.    GPU shared and local cache sizes are under 1 MB.  

Based upon transistor count, I would expect the highest end FPGA with 8 billion transistors (compared to 1 billion for an i7) to have a maximum gain of 8x assuming the ratio of transistors to execution units and cache was the same and there is no transistor count overhead or clock frequency disadvantages in the FPGA compared to an Intel ASIC.    By relying on Intel's AES instructions for 90% of the hash, you already have an almost ideal ASIC right in your computer.

So to answer your question:  CPU cache and AES hardware instructions will determine mining performance in one case, and memory bus speeds in the other.  

Note that the real protection against ASIC will be the community consensus and will to change the hashing algorithm to keep it CPU bound.



What is to stop it from becoming centralized by virtual machines?

One thing virtual machines are not good at is random number generation. Maybe you should find a way to make people use real CPUs? Otherwise you'll end up with even more centralization than you'd have from ASICs.
bytemaster (OP)
Hero Member
Offline

Activity: 770
Merit: 566

fractally

August 24, 2013, 11:41:43 PM
 #57

Quote
As you can tell from this thread, the mining algorithm is still in flux, but the goals are, as you stated, to favor the CPU over everything else.

Based upon my most recent thoughts (pending experiments) here is what I expect:

If the algorithm accesses the memory in random order that invalidates the CPU cache, then memory bus bandwidth will be the bottleneck.  After thinking about it, the memory bus is something that could be optimized out by an ASIC so this is not a desirable situation.

The algorithm needs to be 'fast to validate' which also implies you do not want to be memory bus bound.

The result is that I will probably target the hash memory requirements to near 8 MB (Core i7 CPU cache) which means that your best performance will probably be from single-core operation (multiple cores would start cache thrashing).  As Intel releases new chips with more cache you can start using more cores to positive effect.

A GPU has several different memory classes (global, shared, and local) and the global memory is much slower (like CPU to RAM) than the shared or local memory which behave like CPU cache.  So an algorithm that will fit in cache on a CPU will result in cache thrashing on a GPU.   This cache-thrashing combined with an unpredictable fetch pattern means the GPU will be stalled waiting on data most of the time.    GPU shared and local cache sizes are under 1 MB. 

Based upon transistor count, I would expect the highest end FPGA with 8 billion transistors (compared to 1 billion for an i7) to have a maximum gain of 8x assuming the ratio of transistors to execution units and cache was the same and there is no transistor count overhead or clock frequency disadvantages in the FPGA compared to an Intel ASIC.    By relying on Intel's AES instructions for 90% of the hash, you already have an almost ideal ASIC right in your computer.

So to answer your question:  CPU cache and AES hardware instructions will determine mining performance in one case, and memory bus speeds in the other. 

Note that the real protection against ASIC will be the community consensus and will to change the hashing algorithm to keep it CPU bound.



Quote
What is to stop it from becoming centralized by virtual machines?

One thing virtual machines are not good at is random number generation. Maybe you should find a way to make people use real CPUs? Otherwise you'll end up with even more centralization than you'd have from ASICs.

Virtual machines are still cache limited and incur extra overhead.


https://fractally.com - the next generation of decentralized autonomous organizations (DAOs).
bytemaster (OP)
Hero Member
Offline

Activity: 770
Merit: 566

fractally

August 24, 2013, 11:48:44 PM
 #58

Mining is interesting and all, but I really thought there would be much more discussion about the revolutionary nature of BitUSD and BitGold, as well as options and shorts.


https://fractally.com - the next generation of decentralized autonomous organizations (DAOs).
jedunnigan
Sr. Member
Offline

Activity: 279
Merit: 250

August 25, 2013, 12:09:28 AM
 #59

Quote
Mining is interesting and all, but I really thought there would be much more discussion about the revolutionary nature of BitUSD and BitGold, as well as options and shorts.



Give it time. There is quite a bit to digest in your white paper.
td services
Sr. Member
Offline

Activity: 448
Merit: 250

black swan hunter

August 25, 2013, 01:12:50 AM
 #60

Quote
Mining is interesting and all, but I really thought there would be much more discussion about the revolutionary nature of BitUSD and BitGold, as well as options and shorts.



I'm already enthused about the possibilities; I'm most interested in the nuts 'n' bolts of mining, so as to be prepared with an optimized platform when mining starts, since this is one of the shortest-lead-time items per your announced schedule.