Come-from-Beyond
Legendary
Offline
Activity: 2142
Merit: 1010
Newbie
|
|
October 02, 2015, 05:05:32 PM |
|
"EMUNIE" - Can I just politely ask the audience to use the correct way to write the brand: eMunie. Thank you. I wrote "eMunie" but then saw the thread title...
|
|
|
|
skywave
Sr. Member
Offline
Activity: 420
Merit: 250
"to endure to achieve"
|
|
October 02, 2015, 05:42:12 PM |
|
@Come-from-Beyond Haha you are right - I didn't even notice that in the thread title...
|
|
|
|
worhiper_-_
|
|
October 03, 2015, 12:00:08 AM |
|
Yeah, funny how every thread in this forum devolves into "my coin has more nodes than your coin".
You'd think you'd stepped into 2013 walking in here, right?
People get salty when it comes to new releases. It seems like taking development under cover and moving to a dedicated forum worked for the team. We'll be able to judge them by the innovation after the release, and even better once the source is out.
|
|
|
|
chennan
Legendary
Offline
Activity: 1316
Merit: 1004
|
|
October 03, 2015, 05:55:53 AM |
|
Is it possible to invest in this project?
Not yet; it's not launched and there is no public fundraiser scheduled atm. This is a cool project, and cryptos need to find a way to be as fast as possible for the masses to want to adopt them, given their fast-paced lifestyles, IMO... so when do you think the official release of this coin will be? Any ideas?
|
|
|
|
Anima
Member
Offline
Activity: 63
Merit: 10
|
|
October 03, 2015, 10:39:07 AM |
|
Is it possible to invest in this project?
Not yet; it's not launched and there is no public fundraiser scheduled atm. This is a cool project, and cryptos need to find a way to be as fast as possible for the masses to want to adopt them, given their fast-paced lifestyles, IMO... so when do you think the official release of this coin will be? Any ideas? There are no "deadlines" - we will release when it's done. Having no deadlines allows us to experiment, find new solutions even when we thought we had already found one, and test them out. We have probably disappointed more people by stating deadlines than by having none, because people get all excited that we are going to launch soon, only to have their hopes dashed when we decide the tech can be much improved. If you're really interested in the project, you can go to our forum and ask to be a beta tester, and help test it out after the founder tests the "alpha" versions. There will soon be a new beta to be tested.
|
Best regards from Anima - proud member of the Radix team.
|
|
|
Fuserleer (OP)
Legendary
Offline
Activity: 1064
Merit: 1020
|
|
October 03, 2015, 10:45:22 AM |
|
Is it possible to invest in this project?
Not yet; it's not launched and there is no public fundraiser scheduled atm. This is a cool project, and cryptos need to find a way to be as fast as possible for the masses to want to adopt them, given their fast-paced lifestyles, IMO... so when do you think the official release of this coin will be? Any ideas? There are no "deadlines" - we will release when it's done. Having no deadlines allows us to experiment, find new solutions even when we thought we had already found one, and test them out. We have probably disappointed more people by stating deadlines than by having none, because people get all excited that we are going to launch soon, only to have their hopes dashed when we decide the tech can be much improved. If you're really interested in the project, you can go to our forum and ask to be a beta tester, and help test it out after the founder tests the "alpha" versions. There will soon be a new beta to be tested. I was just about to post the very same thing. "It's ready when it's ready" is our mentality. I made the mistake of setting deadlines in the past, only to find I wasn't happy with components of the platform, or had discovered a better way to do things. I've probably thrown away more code and smart ideas developing eMunie over 2+ years than most of crypto combined! IMO a better product, in any way, is always worth sacrificing deadlines for, providing that myself and those with a real vested interest all agree. So I don't set deadlines anymore. With that in mind, the core design hasn't changed for 9+ months now, and I don't see any better ways to do the things we want to do in order to achieve the goals we set - most of which are already achieved. Take from that what you will.
|
|
|
|
chennan
Legendary
Offline
Activity: 1316
Merit: 1004
|
|
October 03, 2015, 07:23:37 PM |
|
Is it possible to invest in this project?
Not yet; it's not launched and there is no public fundraiser scheduled atm. This is a cool project, and cryptos need to find a way to be as fast as possible for the masses to want to adopt them, given their fast-paced lifestyles, IMO... so when do you think the official release of this coin will be? Any ideas? There are no "deadlines" - we will release when it's done. Having no deadlines allows us to experiment, find new solutions even when we thought we had already found one, and test them out. We have probably disappointed more people by stating deadlines than by having none, because people get all excited that we are going to launch soon, only to have their hopes dashed when we decide the tech can be much improved. If you're really interested in the project, you can go to our forum and ask to be a beta tester, and help test it out after the founder tests the "alpha" versions. There will soon be a new beta to be tested. I was just about to post the very same thing. "It's ready when it's ready" is our mentality. I made the mistake of setting deadlines in the past, only to find I wasn't happy with components of the platform, or had discovered a better way to do things. I've probably thrown away more code and smart ideas developing eMunie over 2+ years than most of crypto combined! IMO a better product, in any way, is always worth sacrificing deadlines for, providing that myself and those with a real vested interest all agree. So I don't set deadlines anymore. With that in mind, the core design hasn't changed for 9+ months now, and I don't see any better ways to do the things we want to do in order to achieve the goals we set - most of which are already achieved. Take from that what you will. Nice, well will you just be opening up another thread asking for more testers for the upcoming beta version? Or will you just be posting that information here? Regardless, I'm going to be monitoring this crypto, so I hope the best for what you guys are trying to achieve.
|
|
|
|
Fuserleer (OP)
Legendary
Offline
Activity: 1064
Merit: 1020
|
|
October 03, 2015, 07:28:52 PM |
|
I'll make a thread on here when the time comes.
|
|
|
|
Anima
Member
Offline
Activity: 63
Merit: 10
|
|
October 05, 2015, 12:56:46 PM |
|
I don't need to know how Bitshares works in detail to apply general computer science theory to the claims made. Well, seeing as Larimer himself said the solution is to "...keep everything in RAM...", how much RAM do you think is required to keep up with a sustained 100,000 tps if it is indeed true?
Just for the record, cross post from https://bitcointalk.org/index.php?topic=1196533.msg12575441#msg12575441 : I want to address the MAJOR misconception, which is that we keep all transactions in RAM and that we need access to the full transaction history to process new transactions.
The default wallet has all transactions expiring just 15 seconds after they are signed, which means that the network only has to consider 1,500,000 * 20 bytes (trx id) => ~30 MB of memory to protect against replay attacks of the same transaction.
The vast majority of all transactions simply modify EXISTING data structures (balances, orders, etc). The only types of transaction that increase memory use permanently are account creation, asset creation, and witness/worker/committee member creation. These particular operations COST much more than operations that modify existing data. Their cost is derived from the need to keep them in memory forever.
So the system needs the ability to STREAM 11 MB per second of data to disk and over the network (assuming all transactions were 120 bytes).
If there were 6 billion accounts and the average account had 1 KB of data associated with it, then the system would require 6000 GB or 6 TB of RAM... considering you can already buy motherboards supporting 2 TB of RAM, and probably more if you look in the right places (http://www.eteknix.com/intels-new-serverboard-supports-dual-cpu-2tb-ram/), I don't think it is unreasonable to require 1 TB per BILLION accounts.

Ok, that clears that up - maybe he should be a bit clearer in future about what exactly "...keep everything in RAM..." means. It still leaves a lot of questions unanswered regarding that claim though, specifically the IO-related ones.

Streaming 11 MB/s from disk doesn't sound too hard, but it depends on a number of factors. Reading one large consecutive 11 MB chunk per second is of course child's play, but if you are reading 11 MB in many small reads (or worse still, if it's a mechanical platter drive and the data is fragmented) then that simple task becomes not so simple.

Network IO also has some potential issues. 11 MB/s downstream isn't too much of a problem - a 100 Mbit downstream line will just about suffice - but what about upstream? I'm assuming (so correct me if I'm wrong) that these machines will have numerous connections to other machines, and will have to relay that information to other nodes. Even if each node only has a few connections (10-20), but has to relay a large portion of those 100,000 tps to each of them, upstream bandwidth requirements for that node quickly approach multiple gigabits in the worst case.

Furthermore, let's assume that Bitshares is a huge success, is processing just 10,000 tps sustained, and that none of these issues exist other than storage. As Bitshares relies on vertical scaling, and we've already determined that 100,000 tps = ~1 TB of data a day, 10,000 tps = 100 GB daily. Operators of these machines are going to be spending a lot of money on fast drive space and will have to employ sophisticated storage solutions in order to keep pace. This becomes quite insane at the 100,000 tps level (365 TB per year) - perhaps Bitshares has some chain pruning or mechanisms to keep this down? (I hope so!)

Finally, back to RAM requirements: what measures or mechanisms are in place to prevent someone from creating 1 billion or more empty accounts, causing RAM requirements to shoot upwards as this information is kept in RAM? A few machines could easily do this over the course of a couple of weeks if there are no other costs associated with it. I assume there is some filtering to only keep accounts with activity in RAM, as otherwise this will be a major issue.

Either way, this is just another example of how vertically scaled systems are not viable. Should Bitshares grow to the level where it is processing 100,000s of transactions per second and has even a few 100M accounts, you need a machine with 100s of GB of RAM, 100s of TB of storage, and internet connections at multiple-gigabit speeds... not really accessible to the man on the street. Perhaps the cost of participating at that level just isn't an issue, as Bitshares has always had a semi-centralized element to it anyway, and most of its supporters don't seem to mind. For me though, relying on ever-increasing hardware performance and sacrificing the core principles which brought us all here in the first place is a cop out.
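For reference, the figures traded back and forth above can be sanity-checked with a quick back-of-envelope sketch (Java, purely illustrative; the transaction size, relay fan-out and per-account state are the assumptions quoted in this exchange):

```java
// Back-of-envelope check of the figures discussed above (illustrative only;
// the inputs mirror the numbers quoted in this exchange).
public class ThroughputEstimates {
    public static void main(String[] args) {
        long tps = 100_000;             // sustained transactions per second (claimed)
        long txBytes = 120;             // assumed average transaction size in bytes
        long peers = 15;                // assumed relay fan-out per node (10-20 range)

        double streamMiBps = tps * txBytes / (1024.0 * 1024.0);
        System.out.printf("Disk/network stream: %.1f MiB/s%n", streamMiBps);        // ~11.4 MiB/s

        double dailyTB = tps * txBytes * 86_400 / 1e12;
        System.out.printf("Raw storage growth: %.2f TB/day%n", dailyTB);            // ~1 TB/day

        double upstreamGbps = tps * txBytes * 8.0 * peers / 1e9;
        System.out.printf("Worst-case relay upstream: %.1f Gbit/s%n", upstreamGbps); // ~1.4 Gbit/s

        long accounts = 6_000_000_000L; // hypothetical account count from the quote
        long perAccount = 1024;         // 1 KB of state per account (assumed)
        System.out.printf("Account state in RAM: %d TB%n",
                accounts * perAccount / 1_000_000_000_000L);                         // 6 TB
    }
}
```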
2046483ms th_a application.cpp:516 get_item ] Couldn't find block 00008e220adc1561e0ceb4964000000000000000 -- corresponding ID in our chain is 00008e220adc1561e0ceb496e2fe61effc44196e
2046486ms th_a application.cpp:432 handle_transaction ] Got transaction from network
./run.sh: line 1: 8080 Segmentation fault ./witness_node --genesis-json "oct-02-testnet-genesis.json" -d oct02 -w \""1.6.0"\" -w \""1.6.1"\" -w \""1.6.2"\" -w \""1.6.3"\" -w \""1.6.4"\" -w \""1.6.5"\" -w \""1.6.6"\" -w \""1.6.7"\" -w \""1.6.8"\" -w \""1.6.9"\" -w \""1.6.10"\" -w \""1.6.11"\" --replay-blockchain
This is what the init node said before it died during the flood. We are looking into what could have caused it. As far as release plans go, we will protect the network from excessive flooding by rate limiting transaction throughput in the network code. We recently made a change that allowed a peer to fetch more than one transaction at a time, and that change is what allowed us to hit 1000+ TPS. That change had the side effect of making the network vulnerable to flooding. For the BTS2 network we will revert to only fetching 1 transaction at a time, which will limit throughput of the network to under 100 TPS. This will only be a limit in the P2P code, which can be upgraded at any time without requiring a hard fork. If the network is generating anywhere near 100 TPS then the network will be earning more than $1M per day in fees and our market cap would be closer to Bitcoin's market cap. In other words, this should have zero impact on customer experience over the next several months. By the time we start gaining traction like that, we will have worked through the kinks of getting a higher-throughput network layer.
So as can be evidenced here, while Bitshares did reach 1000 tps (peak!) during their internal tests, it caused stability issues that brought the test to a halt. To resolve it, they have capped throughput at 100 tps due to network IO issues. Link to the post: https://bitsharestalk.org/index.php/topic,18717.msg241280.html#msg241280
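Purely as an illustration of the fetch-batching trade-off described in that quote (this is not BitShares' networking code; the class and method names are made up), a sketch of limiting each peer to one outstanding transaction fetch:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical illustration of the trade-off above: allowing only one outstanding
// fetch per peer caps throughput, while batched fetches raise it but make
// unthrottled flooding easier.
class PeerFetcher {
    private final Queue<String> advertisedTxIds = new ArrayDeque<>();
    private int outstanding = 0;
    private final int maxOutstanding;   // 1 = conservative mode, N = batched mode

    PeerFetcher(int maxOutstanding) { this.maxOutstanding = maxOutstanding; }

    void onInventory(String txId)           { advertisedTxIds.add(txId); pump(); }
    void onTransactionReceived(String txId) { outstanding--; pump(); }

    // Request more transactions only while we are under the outstanding limit.
    private void pump() {
        while (outstanding < maxOutstanding && !advertisedTxIds.isEmpty()) {
            String txId = advertisedTxIds.poll();
            outstanding++;
            sendFetchRequest(txId);
        }
    }

    private void sendFetchRequest(String txId) {
        System.out.println("fetch " + txId); // actual network send omitted in this sketch
    }
}
```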
|
Best regards from Anima - proud member of the Radix team.
|
|
|
Fuserleer (OP)
Legendary
Offline
Activity: 1064
Merit: 1020
|
|
October 24, 2015, 12:22:08 PM |
|
Small update on our testing. We've been running a beta test for a couple of days now and did some sustained and burst performance tests to see how things might go in the real world. The network was about 30 machines, some of which were very low powered - I think we had a couple of micro-EWS Amazon instances and even an Odroid! The network was also configured to operate as a single partition; multiple-partition tests will commence soon (near-linear multiplier). First we did a couple of transaction burst tests. IIRC we hit a peak of 550 tx/s, but the guy making the video didn't catch it. The next one he did catch, and we managed a 400 tx/s spike on this one: https://www.youtube.com/watch?feature=player_embedded&v=L-VBp2lAI5I Secondly, we set up a steady sustained spam of about 100-200 tx/s over about 30 mins or so, and then threw multiple bursts on top of that: https://www.youtube.com/watch?feature=player_embedded&v=8jSiUE8VEWc Enjoy!
|
|
|
|
TPTB_need_war
|
|
November 06, 2015, 02:25:21 PM Last edit: November 06, 2015, 10:50:09 PM by TPTB_need_war |
|
Let's talk software engineering a bit... Hmmm... I've found that the major bottlenecks on lower-end stuff are actually the IO DB writes/reads and not so much the crypto-related stuff. Sure it has a positive effect if you can speed it up, but a good 70%+ of the optimizing I do is about how to get data over the IO quicker and more efficiently.
That was like word for word what Bytemaster said in this youtube video heh: http://www.youtube.com/watch?v=bBlAVeVFWFM

Daniel Larimer incorrectly claims in that video that it is not reliable to validate transactions in parallel across multiple threads. Nonsense. Only if the inputs to a transaction fail to validate would one need to potentially check whether some other transactions need to be ordered in front of it, or check whether it is a double-spend.

And he incorrectly implies that the only way to get high TX/s is to eliminate storing the UTXO on disk, presumably because he hasn't conceived of using SSDs and/or RAID and/or node partitioning. It is impossible to keep the entire world's UTXO in RAM given 36 bytes of storage for each 256-bit output address+value, even with just 1 billion users and several addresses per user. He mentions using indices instead of hashes, but enforcing such determinism over a network makes it extremely brittle (there are numerous ways it can fail, and having addresses assigned by the block chain violates user autonomy and the end-to-end principle); as well, even 64-bit hashes are subject to collisions at billion scale.

Essentially he is making the error of optimizing at the low level while breaking higher-level semantics, because he apparently hasn't gone about the way to really scale and solve the problem at the high level semantically.

Edit: Fuserleer applies the term "vertical scaling" to describe Bitshares' optimization strategy.
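A minimal sketch of the claim being argued here (hypothetical types, not any project's actual code): stateless signature checks parallelize trivially, and only transactions that spend the same output need conflict handling or ordering:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the argument above: stateless checks (signatures) run in parallel;
// only spends that touch the same output require any serialization.
class ParallelValidator {
    private final Map<String, String> spentBy = new ConcurrentHashMap<>(); // outpoint -> txId

    boolean validate(List<Tx> batch) {
        // 1. Embarrassingly parallel: verify every signature independently.
        boolean sigsOk = batch.parallelStream().allMatch(Tx::verifySignatures);
        if (!sigsOk) return false;

        // 2. Conflict detection: claim each spent outpoint atomically; a second
        //    claim on the same outpoint is a double-spend and needs ordering rules.
        for (Tx tx : batch) {
            for (String outpoint : tx.inputs()) {
                String prev = spentBy.putIfAbsent(outpoint, tx.id());
                if (prev != null && !prev.equals(tx.id())) {
                    return false; // conflicting spend - only this case needs ordering
                }
            }
        }
        return true;
    }

    interface Tx {
        String id();
        Set<String> inputs();          // outpoints consumed by this transaction
        boolean verifySignatures();    // pure function of the transaction itself
    }
}
```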
|
|
|
|
TPTB_need_war
|
|
November 06, 2015, 02:48:26 PM |
|
Hmmm... I've found that the major bottlenecks on lower-end stuff are actually the IO DB writes/reads and not so much the crypto-related stuff. Sure it has a positive effect if you can speed it up, but a good 70%+ of the optimizing I do is about how to get data over the IO quicker and more efficiently.
What DB system do you use? MySQL? I use http://docs.oracle.com/javase/8/docs/api/java/nio/MappedByteBuffer.html. I have just recalled that eMunie does much more than just payments; in this case we cannot compare our solutions, because our cryptocurrency works with payments only and doesn't need to do sophisticated stuff like order matching.

MySQL and Derby for development; we'll probably go with Derby or H2 for V1.0. The data stores themselves are abstracted though, so any DB solution can sit behind them with minor work, so long as it implements the basic interface. That solution for you (if it fits your purpose) will be very fast; then your IO bottleneck will mainly shift to the network, I imagine?

Both of these methods are horridly inefficient. Cripes, disk space is not at a premium. Duh!
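For reference, a minimal sketch of the memory-mapped-file approach behind the java.nio.MappedByteBuffer link above (the file name and fixed-width record layout are made up for illustration):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Minimal example of the memory-mapped file approach linked above.
// File name and record layout are arbitrary for illustration.
public class MappedStore {
    private static final int RECORD_SIZE = 64; // bytes per record (assumed)

    public static void main(String[] args) throws IOException {
        try (FileChannel ch = FileChannel.open(Path.of("ledger.dat"),
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {

            // Map 1 MiB of the file; reads/writes go through the OS page cache,
            // so hot records behave like RAM while the OS handles persistence.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1 << 20);

            // Write record #42.
            buf.position(42 * RECORD_SIZE);
            buf.putLong(123_456_789L);            // e.g. a balance
            buf.putLong(System.currentTimeMillis());

            // Read it back.
            buf.position(42 * RECORD_SIZE);
            System.out.println("balance=" + buf.getLong() + " updated=" + buf.getLong());

            buf.force(); // flush dirty pages to disk
        }
    }
}
```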
|
|
|
|
Come-from-Beyond
Legendary
Offline
Activity: 2142
Merit: 1010
Newbie
|
|
November 06, 2015, 05:53:36 PM |
|
Daniel Larimer incorrectly claims in that video that it is not reliable to validate transactions in parallel across multiple threads. Nonsense. Why nonsense? It depends on the linearity of the system. For a linear system order doesn't matter; for a non-linear one it does. PS: We assume that multithreaded execution can't ensure a specific order of events, which is pretty reasonable for current architectures without placing a lot of memory barriers, which would degrade the performance significantly.
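A toy illustration of the linearity point (hypothetical example, not from either project): commutative balance deltas give the same result in any order, while a rule that depends on intermediate state does not:

```java
// Toy illustration of order-independent vs order-dependent updates.
public class OrderingDemo {
    public static void main(String[] args) {
        // Linear/commutative: applying +50 and -30 in either order ends at the same balance.
        long a = apply(apply(100, +50), -30);
        long b = apply(apply(100, -30), +50);
        System.out.println(a == b); // true - order does not matter

        // Non-linear: a rule like "reject if it would go negative" makes order matter.
        long c = applyChecked(applyChecked(10, -30), +50); // -30 rejected first -> 60
        long d = applyChecked(applyChecked(10, +50), -30); // +50 first -> 30
        System.out.println(c == d); // false - order changes the outcome
    }

    static long apply(long balance, long delta) { return balance + delta; }

    static long applyChecked(long balance, long delta) {
        return (balance + delta < 0) ? balance : balance + delta; // reject overdrafts
    }
}
```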
|
|
|
|
nexern
|
|
November 06, 2015, 09:09:29 PM |
|
@TPTB_need_war yes, agree. datasinks like mysql etc. don't provide predictable query performance under real-world conditions. there are reasons ssd-optimized stuff like this exists: http://www.aerospike.com/performance/
regarding threading and trade-offs, it's just a matter of tools. still waiting for the first crypto project written with an erlang/mnesia combo. spawning a process costs you 1 µs, and using mnesia as a distributed datasink means you are working within the same memory space as your application; as a result, fetching data objects is a matter of a few microseconds, all proven and robust due to erlang's memory protection at the VM level. however, curious to see the final storage setup delivering those tps - doubt it will be mysql or a similar rdbms.
|
|
|
|
TPTB_need_war
|
|
November 06, 2015, 10:16:46 PM Last edit: November 06, 2015, 11:09:45 PM by TPTB_need_war |
|
That solution for you (if it fits your purpose) will be very fast; then your IO bottleneck will shift to the network, I imagine?
Network will become a bottleneck at 12'000 TPS (for 100 Mbps). Yup, partitions my friend, that problem goes away.

I expect that when you do finally issue a white paper, the weakness is going to be that the economic model will be gameable such that there is a loss of either Consistency, Availability, or Partition tolerance (CAP theorem). Because without a proof-of-work (or proof-of-share[1]) block chain, there is no objective SPOT (single point of truth), which really becomes onerous once partitioning is added to the design, because afaics there is then no way to unify the partitioned perspectives. I believe this to be the analogous underlying flaw of Iota and "block list". The challenge with proving this flaw for Iota et al. is to show a game theory that defeats the assumptions of the developers (white paper), e.g. selfish-mining game theory against Satoshi's proof-of-work. However, I have argued in Iota's thread that the onus is on them to prove their design doesn't have such a game theory. Otherwise you all can put these designs into the wild and then we can wait to see if they blow up at scale. Note I haven't had enough time to follow up on Iota lately, and I am waiting for them to get all their final analysis into a final white paper before I sit down and really try to break it beyond just expressing theoretical concerns/doubts.

[1] In PoS the entropy is bounded and thus in theory it should be possible to game the ordering. In theory, there should be a game theory such that the rich always get richer, paralleling the 25 - 33% share selfish-mining attack on Satoshi's proof-of-work. However, it is not yet shown how this is always/often a practical issue. Proof-of-share can't distribute shares of the money supply to those who do not already have some of the money supply. Proof-of-share is thus not compatible with a debasement distribution that flattens (recycles) the power-law distribution, although neither is proof-of-work once it is dominated by ASICs. Without recycling of the power-law distribution, velocity-of-money suffers unless debt-driven booms are employed, and then government becomes a political expediency to "redistribute from the rich to the poor" (which is then gamed by the rich, with periodic class/world warfare). Proof-of-share suffers from conflating coin ownership with mining; thus if not all coin owners are equally incentivized to participate in mining, then the rich control the mining. A coin owner with a holding that is only worth less than his toenail isn't going to bother with using his share to mine. Thus proof-of-share is very incompatible with the direction towards micro-transactions and micro-shares. Any attempt to correct this by weighting smaller shares more can then be gamed by the rich, who can split their shares into micro-shares. Ideally debasement should be distributed to an asset that users control but the rich can't profitably obtain.

You can't just make a claim out of context that an "honest" majority of the trust reputation will decide the winner of a double-spend. You have to model the state machine holistically before you can make any claim.

Proof-of-work eliminates that requirement because each new iteration of a block solution is independent of the prior one (trials are often simplistically modeled as a Poisson distribution), except to some small extent in selfish mining, which is also easily modeled with a few equations. See the selfish-mining paper for the state machine and then imagine how complex the model for his design will be.

This independence is what I mean when I say the entropy of PoW is open (unbounded), while it is closed for PoS.[1]
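For context on the 12'000 TPS figure quoted at the top of this post, a small sketch of how a 100 Mbps link bounds throughput, assuming roughly 1 KB per transaction (that size is an assumption, chosen because it makes the quoted arithmetic line up):

```java
// Rough check of the quoted figure: transactions per second that fit through
// a 100 Mbps link, assuming ~1 KB per transaction (assumption).
public class LinkBound {
    public static void main(String[] args) {
        double linkBitsPerSec = 100e6;   // 100 Mbps
        double txBytes = 1024;           // assumed average transaction size
        double tps = linkBitsPerSec / (txBytes * 8);
        System.out.printf("Upper bound: ~%.0f TPS%n", tps); // ~12,200 TPS
    }
}
```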
|
|
|
|
Come-from-Beyond
Legendary
Offline
Activity: 2142
Merit: 1010
Newbie
|
|
November 06, 2015, 10:21:45 PM |
|
I expect that when you do finally issue a white paper, the weakness is going to be that the economic model will be gameable such that there is a loss of either Consistency, Availability, or Partition tolerance.
I believe Availability will always be nine nines for any decentralized cryptocurrency and Consistency will always be eventual, so Partition tolerance is the only toy we all can play with.
|
|
|
|
TPTB_need_war
|
|
November 06, 2015, 10:57:30 PM |
|
Daniel Larimer incorrectly claims in that video that it is not reliable to validate transactions in parallel across multiple threads. Nonsense. Why nonsense? It depends on the linearity of the system. For a linear system order doesn't matter; for a non-linear one it does. PS: We assume that multithreaded execution can't ensure a specific order of events, which is pretty reasonable for current architectures without placing a lot of memory barriers, which would degrade the performance significantly. Because (as indicated/implied by my prior post) it is more sane to design your system holistically such that ordering of transactions is an exceptional event, and not a routine one. Conflating the "order book" with TX/s is a category error. It is not even clear that a decentralized "order book" can or should have a deterministic ordering, because determinism may allow the market to be gamed. In any case, it is not relevant to the issue of the rate of processing TX/s for signed transactions. Separation of concerns is a fundamental principle of engineering.
|
|
|
|
TPTB_need_war
|
|
November 06, 2015, 11:05:16 PM |
|
I expect that when you do finally issue a white paper, the weakness is going to be that the economic model will be gameable such that there is a loss of either Consistency, Availability, or Partition tolerance.
I believe Availability will always be nine nines for any decentralized cryptocurrency and Consistency will always be eventual, so Partition tolerance is the only toy we all can play with. I have already argued to you in your Iota thread that your definition of Availability has no relevant meaning (propagation across the peer network is not a semantic outcome). Rather, a meaningful Availability is the ability to put your transaction into the consensus. In Bitcoin, that availability is limited in several ways:
- Confirmation happens only every 10 minutes.
- Inclusion in a block is dependent on the whims of the node which won the block, and on the maximum block size.
- One who has sufficient hashrate has higher availability.
- 51% of the network hashrate can blacklist you, removing your Availability.
It is my stance that the holistic game-theory analysis of Availability in Iota, eMunie, and "block list" is much more muddled thus far. The multifurcated tree of Iota appears to be multiple (potentially inConsistent) Partitions, so the Availability to create a new tree branch doesn't appear to be meaningful Availability, since there is no confirmation of consensus.
|
|
|
|
Come-from-Beyond
Legendary
Offline
Activity: 2142
Merit: 1010
Newbie
|
|
November 07, 2015, 12:24:14 AM |
|
I have already argued to you...
Aye, we got stuck because I was too conservative in the definitions.
|
|
|
|
TPTB_need_war
|
|
November 07, 2015, 12:33:43 AM |
|
I have already argued to you...
Aye, we got stuck because I was too conservative in the definitions. That is why I will just wait for the dust to settle before, and if, I attempt a more quantitative argument. Or perhaps the guys who found the selfish-mining flaw in Bitcoin will endeavor to analyze these new consensus designs once they have been more finalized. It is an inefficient use of time to chase a moving target where I have the disadvantage of not having first access to the information that is in your head(s). I'll wait for you to publish.
|
|
|
|
|