Bitcoin Forum

Alternate cryptocurrencies => Altcoin Discussion => Topic started by: GingerAle on July 31, 2015, 01:48:40 PM



Title: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on July 31, 2015, 01:48:40 PM
I grow weary of the constant trolling, inverse trolling, talk of "investing", nonsense etc. This is cool software / technology that happens to be explicitly tied to money.

This thread is about Monero technical ideas, discussion, ONLY. If you barge in here with something non-technical, I will delete it. This is not a support thread. I will point you in the right direction, then delete your post.

Yes, you might say "there's a monero specific forum at forum.getmonero.org". Yes there is. People don't use it for their own reasons.

My first post will be a copy of the discussion smooth and I were having regarding trying to figure out how to automodify the minimum fees. I hope that this cleaner thread will spur more actual discussion as opposed to the neverending "woh look market cap! hey I have a scam detector! who wants some spam?"



Title: Re: [XMR] Monero Technical Discussion
Post by: GingerAle on July 31, 2015, 01:48:51 PM
Well, now that that's all over, let's talk about something fun!!!

I've been trying to wrap my head around how to convert the fee calculation to something dynamic that wouldn't require human intervention.

As far as I understand, the primary reason the fees exist is to prevent blockchain spamming (and to provide mining incentive). So Monero has some minimum fee that's based on the size of the transaction. I'm sure you've noticed how this works if you've tried to clean up some dust using Moneromoo's recent simplewallet enhancements (2.3 XMR to clean up 5 XMR of dust wuuuuut)

Currently, the fee is 0.01 xmr / kb as writ here: https://forum.getmonero.org/1/news-and-announcements/91/monday-monero-missives-17-november-2nd-2014

Now this is all fine and dandy except when Monero shoots to the moon (1 xmr = $1000), because an increase in price doesn't mean the size of the transactions will decrease. A transaction to send 0.045 XMR will use the same space (well, probably more depending on mixin availability) as a transaction to send 45 XMR (more or less).
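Since the minimum scales with size rather than value, a dust sweep gets expensive fast. A toy illustration (the 0.01 XMR/kB rate is from the Missive linked above; the sizes are made-up examples):

```python
import math

# Minimum fee scales with transaction size, not with the amount sent.
FEE_PER_KB = 0.01  # XMR per kB, per the linked Missive

def min_fee(tx_size_bytes: int) -> float:
    return FEE_PER_KB * math.ceil(tx_size_bytes / 1024)

# A small transfer fits in a couple of kB...
small = min_fee(2048)      # 0.02 XMR
# ...but a dust sweep consuming hundreds of tiny inputs can run to
# hundreds of kB, so the fee dwarfs the amount recovered.
sweep = min_fee(235520)    # 230 kB -> about 2.3 XMR
```

The point is only that the fee function never sees the amount transacted, which is exactly the disconnect discussed below.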

Obviously, the easiest solution is to modify the code by hand (the devs intervene), and there would be plenty of incentive to do so - a high cost of transacting decreases the utility of Monero: the coin becomes worth a lot, but it is really expensive to use, so people just hold it. But when humans intervene, you end up with all the nonsense associated with humans. Miners might not want to switch because they're raking in profits. The monero-as-asset crowd will rally for not changing the protocol because transaction friction could actually increase the value of an asset ( perhaps ).

All-in-all, modifying the fee via human intervention is the antithesis of cryptocurrency. Again, the goal of cryptocurrency is to remove as much of the human element as possible.

So, how could we do that?

I don't think it's possible to somehow link the protocol to some type of oracle - i.e., to have the protocol get data input from some external source. For example, you could imagine the protocol getting a feed from some XMR/fiat exchange. I don't think this is workable, primarily because it depends on factors exterior to the protocol.

I think we would have to stick with data that exists within the protocol / blockchain. Specifically, I think these are the data we can use:

t = Transaction frequency = number of transactions per block
a = transaction amount = the amount of XMR actually transacted in a given transaction


Using these data, I think something could be written that would achieve the goal of increasing the XMR cost of a transaction when fiat value is low, and decreasing the XMR cost of a transaction when fiat value is high.

Essentially, when the transaction frequency is low and the transaction amount is high, one could infer that the fiat value of XMR is low.

Conversely, when the transaction frequency is high and the transaction amount is low, one could infer that the fiat value of XMR is high.


Obviously, some sort of rolling window would be used to aggregate an average of these data.
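A minimal sketch of that inference, assuming a rolling window of per-block stats; the window length, the dampening exponent, and the scaling function are all invented placeholders:

```python
from collections import deque

# Rolling window of (tx_count, total_amount) per block. WINDOW, the
# dampening exponent, and the scaling below are invented for illustration.
WINDOW = 720              # blocks, roughly one day at 2-minute blocks
BASE_FEE_PER_KB = 0.01    # current static minimum

window = deque(maxlen=WINDOW)

def record_block(tx_count: int, total_amount: float) -> None:
    window.append((tx_count, total_amount))

def dynamic_min_fee() -> float:
    if not window:
        return BASE_FEE_PER_KB
    avg_txs = sum(t for t, _ in window) / len(window)
    avg_amt = sum(a for _, a in window) / len(window)
    # High frequency + small amounts -> infer high fiat value -> lower fee.
    # Low frequency + large amounts -> infer low fiat value -> higher fee.
    ratio = (avg_amt / max(avg_txs, 1)) ** 0.5   # dampen the raw ratio
    return 2 * BASE_FEE_PER_KB * ratio / (ratio + 1)
```

Any function with that monotonic shape would do; the interesting work is picking one that is hard to game by stuffing blocks with fake transactions.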

I've been trying to wrap my head around how to convert the fee calculation to something dynamic that wouldn't require human intervention.

This is a useful and interesting effort but keep in mind there are at least four different tx fees:

1. The fee required for relaying by nodes.

2. The default fee set in the code for miners to include a transaction.

3. The default fee set in the code for those sending a transaction.

4. The actual fee at which miners include transactions in practice.

The first is a network function that prevents spamming the p2p network as a DoS attack. It should not really be much less than #4 because that allows spamming the p2p without any real cost (the spammer wouldn't actually pay the fee because the tx wouldn't get mined). It can't be changed (either higher or lower) by individual nodes because if they violate the rule expected by other nodes they will get disconnected.

The second is a default in the code and miners can change it if they want (currently this would require recompiling the source code, but other options may be added later), though setting it lower than #1 would not result in any more transactions being mined since they won't be sent over the p2p UNLESS the miner gets these transactions via another channel, or creates them.

The third is a default in the code and users could change it if they want (currently this would require recompiling the source code, but other options may be added later). If this is set lower than #1, the transaction will be rejected by nodes.

The fourth is entirely a market phenomenon that is a balance between supply, demand, and the fee policies set by users and miners.


EDIT: fixed typo "much much" -> "much less"
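Those four levels and their interaction might be sketched like so (the constants and function names are illustrative assumptions, not Monero's actual code):

```python
# Illustrative sketch only: names and numbers are assumptions, not
# values from the Monero source.
RELAY_MIN_FEE_PER_KB  = 0.01   # 1. floor nodes enforce before relaying
MINER_DEFAULT_PER_KB  = 0.01   # 2. default a miner requires to include a tx
WALLET_DEFAULT_PER_KB = 0.01   # 3. default the sending wallet attaches

def node_will_relay(fee_per_kb: float) -> bool:
    # Below the relay floor the tx never propagates over the p2p network,
    # so it can't reach miners that way no matter what they would accept.
    return fee_per_kb >= RELAY_MIN_FEE_PER_KB

def miner_will_include(fee_per_kb: float) -> bool:
    return fee_per_kb >= MINER_DEFAULT_PER_KB

def tx_gets_mined_via_p2p(fee_per_kb: float) -> bool:
    # 4. the "market fee" is whatever actually gets mined in practice;
    # over p2p that means clearing both the relay floor and miner policy.
    return node_will_relay(fee_per_kb) and miner_will_include(fee_per_kb)
```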

I've been trying to wrap my head around how to convert the fee calculation to something dynamic that wouldn't require human intervention.

This is a useful and interesting effort but keep in mind there are at least four different tx fees:

1. The fee required for relaying by nodes.

2. The default fee set in the code for miners to include a transaction.

3. The default fee set in the code for those sending a transaction.

4. The actual fee at which miners include transactions in practice.

The first is a network function that prevents spamming the p2p network as a DoS attack. It should not really be much much than #4 because that allows spamming the p2p without any real cost (the spammer wouldn't actually pay the fee because the tx wouldn't get mined). It can't be changed (either higher or lower) by individual nodes because if they violate the rule expected by other nodes they will get disconnected.

The second is a default in the code and miners can change it if they want (currently this would require recompiling the source code, but other options may be added later), though setting it lower than #1 would not result in any more transactions being mined since they won't be sent over the p2p UNLESS the miner gets these transactions via another channel, or creates them.

The third is a default in the code and users could change it if they want (currently this would require recompiling the source code, but other options may be added later). If this is set lower than #1, the transaction will be rejected by nodes.

The fourth is entirely a market phenomenon that is a balance between supply, demand, and the fee policies set by users and miners.



So the fee that I'm talking about would be #1 - the fee required for relay by nodes. I'm not sure I'm parsing what you write in the above bold -  so the spammer could spam just by bloating the mempool? I think the confusing part is the mis-type of "much much", which could be "much less" or "much more", and this is a key piece of data.

I'm assuming it's much less.

blue text = but couldn't each individual node recalculate the minimum value based on data on the blockchain? So then all nodes will be in agreement, because they're all making the same calculation off the same blockchain.

If I'm reading you right, the primary protocol-induced fee is #1. Is this fee what I understand as the per-kb fee?

So the fee that I'm talking about would be #1 - the fee required for relay by nodes. I'm not sure I'm parsing what you write in the above bold -  so the spammer could spam just by bloating the mempool? I think the confusing part is the mis-type of "much much", which could be "much less" or "much more", and this is a key piece of data.

Yes exactly, not only the mempool but bandwidth used by every node to relay the transactions.

I'll go back and fix the typo but the idea is that if you broadcast a transaction it should have a significant chance of actually being mined.

Quote
If I'm reading you right, the primary protocol-induced fee is #1. Is this fee what I understand as the per-kb fee?

I'm not sure I would agree that one or another is "primary" protocol-induced. They are all important to the protocol (the process of mining, for example, is certainly part of the protocol), but they are different and interrelated. They're all per-kb in the current implementation.

So the fee that I'm talking about would be #1 - the fee required for relay by nodes. I'm not sure I'm parsing what you write in the above bold -  so the spammer could spam just by bloating the mempool? I think the confusing part is the mis-type of "much much", which could be "much less" or "much more", and this is a key piece of data.

Yes exactly, not only the mempool but bandwidth used by every node to relay the transactions.

I'll go back and fix the typo but the idea is that if you broadcast a transaction it should have a significant chance of actually being mined.

Quote
If I'm reading you right, the primary protocol-induced fee is #1. Is this fee what I understand as the per-kb fee?

I'm not sure I would agree that one or another is "primary" protocol-induced. They are all important to the protocol (the process of mining, for example, is certainly part of the protocol), but they are different and interrelated. They're all per-kb in the current implementation.

gotcha gotcha. I think I'm picking up what you're putting down.

Basically, you're saying that if some kind of algorithmic floating number were to be implemented, we would have to find a way to harmonize it across those 4 different elements. Well, really, elements 1, 2, and 3. What miners decide (#4) has to be >= 1, 2, and 3.

And changing the fees would require recompiling the source code.

I would imagine it would work something like this:

When a block is mined, a new piece of data (activity) is stored in the header. This piece of data (calculated by the miner) is something like 

activity = (sum of number of transactions in n recent blocks) X (total amount in transactions in n recent blocks) / (n recent blocks)

We'll have to figure out exactly what the function X is.

When a new block is made, every miner can validate that the activity value was properly calculated.

(1) When determining whether to propagate a transaction, the daemon can use the most recent activity value in its calculation of minimum fee.

(2) When determining whether to include a transaction, the mining code can use the value in its calculation

(3) When making a transaction, simplewallet can use the activity value to make its calculation of minimum fee.

Sure, we've added more data to each block, but I think Monero is fond of the notion of increased protocol utility vs. concerns over blockchain size. And yes, Monero is sentient and can be fond of things.

Hrm, you would also have to make the floating minimum a range, not a fixed value.

If the precisely calculated minimum is 0.01211 xmr / kb, and by the time your transaction is made and sent to the network the new precisely calculated minimum is 0.01213 xmr / kb, that won't work.

Instead, you would have some range around the precise minimum. This range itself would have to be dynamic, but that could be easy... just use the rate of change of the minimum. I.e., if over n blocks the minimum moved 0.0004 xmr, then that's the range (or something like this).
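Putting the pieces together - an activity value committed per block, a minimum derived from it, and a drift-sized tolerance band - a sketch might look like this. All names, constants, and the stand-in combining function are placeholders for whatever f is eventually chosen:

```python
# All names, constants, and the stand-in combining function below are
# placeholders; the real f remains to be designed.
N_RECENT = 100
BASE_FEE = 0.01

def activity(recent_blocks) -> float:
    """recent_blocks: list of (tx_count, total_amount) for the last n
    blocks. Deterministic, so every node can recompute the value a miner
    committed to in the block header and reject the block if it's wrong."""
    n = max(len(recent_blocks), 1)
    txs = sum(t for t, _ in recent_blocks)
    amt = sum(a for _, a in recent_blocks)
    return txs * amt / n          # stand-in for the unknown function f

def min_fee_from_activity(act: float) -> float:
    # More activity -> infer higher fiat value -> lower XMR minimum.
    return BASE_FEE / (1.0 + act) ** 0.25

def acceptable(fee: float, current_min: float, drift: float) -> bool:
    # Tolerance band sized by the recent movement of the minimum, so a tx
    # built against a value one or two blocks stale isn't rejected.
    return fee >= current_min - abs(drift)
```

The same `min_fee_from_activity` value would then be consulted by the relay check (1), the mining code (2), and simplewallet (3), keeping the three in harmony automatically.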


Title: Re: [XMR] Monero Technical Discussion
Post by: GingerAle on July 31, 2015, 03:51:22 PM
Lots of posts deleted because they were not technical.


Title: Re: [XMR] Monero Technical Discussion
Post by: luigi1111 on July 31, 2015, 03:57:46 PM
Lots of posts deleted because they were not technical.

So pretty. I haven't any useful input on fees.

What do people think about serialized stealth payment IDs versus unique spend keys with single viewkey?

Edit: try to keep up! :D

Edit2: maybe you should change title to "[XMR] Monero Improvement Technical Discussion" or something.


Title: Re: [XMR] Monero Technical Discussion
Post by: GingerAle on July 31, 2015, 04:28:27 PM
Lots of posts deleted because they were not technical.

So pretty. I haven't any useful input on fees.

What do people think about serialized stealth payment IDs versus unique spend keys with single viewkey?

Edit: try to keep up! :D

Edit2: maybe you should change title to "[XMR] Monero Improvement Technical Discussion" or something.

I don't understand .....

good idea on the thread title change.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: kazuki49 on July 31, 2015, 04:32:36 PM
Obligatory reading, how Monero identified some weakness and proposed changes to betterment of the cryptonote protocol:

https://lab.getmonero.org/pubs/MRL-0001.pdf
https://lab.getmonero.org/pubs/MRL-0002.pdf
https://lab.getmonero.org/pubs/MRL-0003.pdf
https://lab.getmonero.org/pubs/MRL-0004.pdf


Title: Re: [XMR] Monero Technical Discussion
Post by: luigi1111 on July 31, 2015, 04:41:52 PM
Lots of posts deleted because they were not technical.

So pretty. I haven't any useful input on fees.

What do people think about serialized stealth payment IDs versus unique spend keys with single viewkey?

Edit: try to keep up! :D

Edit2: maybe you should change title to "[XMR] Monero Improvement Technical Discussion" or something.

I don't understand .....'

good idea on the thread title change.

The new Bytecoin "breakthrough", where they're "depreciating" payment IDs. It is a potential solution, and I'd be really interested in seeing an actual cost analysis instead of generalities.


Title: Re: [XMR] Monero Technical Discussion
Post by: smooth on July 31, 2015, 05:10:28 PM
Lots of posts deleted because they were not technical.

So pretty. I haven't any useful input on fees.

What do people think about serialized stealth payment IDs versus unique spend keys with single viewkey?

Edit: try to keep up! :D

Edit2: maybe you should change title to "[XMR] Monero Improvement Technical Discussion" or something.

I don't understand .....'

good idea on the thread title change.

The new Bytecoin "breakthrough", where they're "depreciating" payment IDs. It is a potential solution, and I'd be really interested in seeing an actual cost analysis instead of generalities.

The idea is to have one view key with multiple spend keys. That gives you multiple public keys (addresses) where half is common and half is unique. This allows scanning for all of these addresses much more quickly than scanning for completely different addresses (for the use case of an exchange or web wallet where each user has his own address -- instead of the current practice of one exchange address plus payment IDs).
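A toy sketch of why that scanning is fast, using integers mod a prime as a stand-in for curve points (NOT real cryptography, just the shape of the Diffie-Hellman step): one shared-secret computation per output plus a hash-table lookup covers every user's address, instead of one full check per address.

```python
import hashlib

# Toy model: integers mod a prime stand in for curve points, so the
# Diffie-Hellman identity r*A == a*R holds just as on the real curve.
Q = 2**31 - 1     # toy group order
G = 7             # toy generator

def pt(x: int) -> int:          # "x * G", a toy public key / point
    return (x * G) % Q

def Hs(x: int) -> int:          # hash-to-scalar
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % Q

# Exchange side: ONE private view key, many per-user spend keys.
a = 123456789                   # shared private view key; A = a*G is public
A = pt(a)
spend_privs = {user: 1000 + user for user in range(5)}
spend_pubs = {pt(b): user for user, b in spend_privs.items()}  # B_i -> user

def send(r: int, B: int):
    """Sender: one-time key P = Hs(r*A)*G + B, plus tx pubkey R = r*G."""
    R = pt(r)
    P = (Hs((r * A) % Q) * G + B) % Q
    return P, R

def scan(P: int, R: int):
    """Scanner: ONE shared-secret computation, then a dict lookup tells
    which user (if any) this output pays -- O(1) in the number of users."""
    shared = (Hs((a * R) % Q) * G) % Q    # Hs(a*R)*G == Hs(r*A)*G
    candidate_B = (P - shared) % Q
    return spend_pubs.get(candidate_B)    # None if the output isn't ours
```

With fully independent addresses, by contrast, the scanner would need a separate Diffie-Hellman step per user per output.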





Title: Re: [XMR] Monero Technical Discussion
Post by: MalMen on July 31, 2015, 05:20:36 PM
The idea is to have one view key with multiple spend keys. That gives you multiple public keys (addresses) where half is common and half is unique. This allows scanning for all of these addresses much more quickly than scanning for completely different addresses (for the use case of an exchange or web wallet where each user has his own address -- instead of the current practice of one exchange address plus payment IDs).

So one viewkey for multiple spendkeys, is that what bytecoin is doing ? I am more favorable to this option rather than payment_id


Title: Re: [XMR] Monero Technical Discussion
Post by: luigi1111 on July 31, 2015, 05:46:40 PM
Lots of posts deleted because they were not technical.

So pretty. I haven't any useful input on fees.

What do people think about serialized stealth payment IDs versus unique spend keys with single viewkey?

Edit: try to keep up! :D

Edit2: maybe you should change title to "[XMR] Monero Improvement Technical Discussion" or something.

I don't understand .....'

good idea on the thread title change.

The new Bytecoin "breakthrough", where they're "depreciating" payment IDs. It is a potential solution, and I'd be really interested in seeing an actual cost analysis instead of generalities.

The idea is to have one view key with multiple spend keys. That gives you multiple public keys (addresses) where half is common and half is unique. This allows scanning for all of these addresses much more quickly than scanning for completely different addresses (for the use case of an exchange or web wallet where each user has his own address -- instead of the current practice of one exchange address plus payment IDs).



Yes indeed. Sorry, I didn't really mean to come across as snarky (not sure if I did).

Neither of these changes any consensus rules in any way; they're just attempts to recommend standards for recipients to differentiate payments.



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on July 31, 2015, 05:52:26 PM
Removing support for payment IDs entirely could be a consensus change. I have no idea if they intend that.

The transaction format could be made a lot cleaner if tx_extra were removed and a fixed-size transaction-key field were added instead. Plus it would remove one obvious way for scumbags to stuff kiddie porn on the blockchain.



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: fluffypony on July 31, 2015, 06:49:23 PM
Removing support for payment IDs entirely could be a consensus change. I have no idea if they intend that.

The transaction format could be made a lot cleaner if tx_extra were remove and a fixed size transaction-key field were added instead. Plus it would remove one obvious way for scumbags to stuff kiddie porn on the blockchain.

I would 100% support dropping tx_extra *if* we have MoneroAsset etc. daughter-chains to deal with the metadata normally stuffed into the mainchain. Having a separate optional output (or tx) identifier is a good replacement.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on August 01, 2015, 03:04:59 AM
Removing support for payment IDs entirely could be a consensus change. I have no idea if they intend that.

The transaction format could be made a lot cleaner if tx_extra were remove and a fixed size transaction-key field were added instead. Plus it would remove one obvious way for scumbags to stuff kiddie porn on the blockchain.

I would 100% support dropping tx_extra *if* we have MoneroAsset etc. duaghter-chains to deal with the metadata normally stuffed into mainchain. Having a separate optional output (or tx) identifier is a good replacement.

I would support the development of a MoneroAsset daughter chain, especially if it enabled refining the primary currency chain. Removing the ability to use the primary currency chain as more than what it's supposed to be - accounting - would be very beneficial, IMO. Especially because Monero already has to deal with an inherently larger blockchain with interesting hurdles for pruning. Having separate chains would also make it explicitly simpler to differentiate between the everyday user (someone who runs a currency-only node) vs. enthusiasts or vested interests running multiple chains. Of course, incentivizing maintenance of the daughter chain is a different beast, unless I'm not understanding daughter chains properly.

Yeah, generally, I feel that trying to multipurpose the ledger is something to be avoided - especially in the case of Monero, where the data is cryptographically obscured. I don't think every blockchain should be viewed as an opportunity for data storage - "metadata normally stuffed into the mainchain". The primary currency chain should be just that - a ledger where it is indicated that I sent n things to someone else. Keep it simple stupid.

and yes, preventing kiddie porn on the blockchain is always a good thing.

damnit I wish I knew how to code so I could get the data from the blockchain to try and come up with a formula for the floating minimum fee adjustment proposal outlined above. Gah. It'd be fun to make pretty graphs.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on August 03, 2015, 11:04:01 AM
I wonder if there's a way to make morphing POW code, such that every n years the POW function gets modified.

This would prevent mining centralization, and do so in a way that avoids politics, because changing the POW would definitely cause an uproar from the mining camp.

In the case that ASICs *are* developed for Monero, this would disrupt that significantly.
If the only thing that exists in 10 years are GPU operations, then it would only disrupt things in terms of software development.
If the only thing is CPU mining, then all it would do is further cement CPU mining.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: poloskarted on August 03, 2015, 11:36:09 AM
I wonder if there's a way to make morphing POW code, such that every n years the POW function gets modified.

This would prevent mining centralization, and do so in a way that avoids politics, because changing the POW would definitely cause an uproar from the mining camp.

In the case that ASICS *are* developed for Monero, this would disrupt that significantly.
If the only thing that exists in 10 years are GPU operations, then it would only disrupt it in terms of software development.
if the only thing is CPU mining, then all it would do is cement further CPU mining.


The following approaches, being IO-bound with a changing dataset, achieve "some" extra level of polymorphism, and although the PoW as it stands has strong merits for ASIC resistance, IMO they improve upon it somewhat.


http://boolberry.com/files/Block_Chain_Based_Proof_of_Work.pdf
https://github.com/ethereum/wiki/wiki/Ethash

For instance, in the case of the blockchain PoW (wild keccak), faster verification results in faster syncing. I believe last year someone conducted a test (in an infographic) of initial blockchain download: XMR took 200+ minutes, whereas BBR came in at about 13 minutes. I'm trying to find where it was posted.

 I can't conceive a situation at the cryptonote protocol level where you could randomly switch algos outright, though. Voting through forks would be inelegant and cause much disruption. Much like modifying the emission curve, even if it were practical on a technical level, it would be far from ideal from an econ standpoint. You can't just switch what people signed up for retroactively.

  Also, you cannot ever stop the tendency to drift towards mining centralization - even the classic notion of 1 CPU, 1 vote can be abused by sysadmins, botnet owners, AWS cowboys, etc. I know this well, as I was mining BTC before there were optimized CPU miners - no GPU miners nor pools - and it was not even called mining, but "generating". It was always centralized to some degree. With the rolling snowball effect of those who are profiting ploughing in more towards expansion, you can only ever attempt to stave off specialized custom silicon.



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on August 03, 2015, 11:58:52 AM
I wonder if there's a way to make morphing POW code, such that every n years the POW function gets modified.

This would prevent mining centralization, and do so in a way that avoids politics, because changing the POW would definitely cause an uproar from the mining camp.

In the case that ASICS *are* developed for Monero, this would disrupt that significantly.
If the only thing that exists in 10 years are GPU operations, then it would only disrupt it in terms of software development.
if the only thing is CPU mining, then all it would do is cement further CPU mining.


The following approaches, being IO bound with a changing dataset achieve "some" extra level of polymorphism and although the PoW as it stands has strong merits for ASIC resistance, IMO they improve upon it somewhat.


http://boolberry.com/files/Block_Chain_Based_Proof_of_Work.pdf
https://github.com/ethereum/wiki/wiki/Ethash

For instance, in the case of the blockchain PoW (wild keccak) faster verification resulting in faster syncing. I believe last year someone conducted a test in an infographic for initial blockchain download and XMR was 200+ minutes, whereas BBR came in at about 13 minutes.  I'm trying to find where it was posted.

 I can't conceive a situation at the cryptonote protocol level where you could randomly switch algo's outright though. Voting through forks would be inelegant and cause much disruption.  Much like modifiying the emission curve, even if it was practical on a technical level, it would seem that it would far from ideal from an econ standpoint. You can't just switch what people signed up for retroactively.

  Also, you cannot ever stop the tendency to drift towards mining centralization- even the classic notion of 1 CPU, 1 Vote can be abused by sysadmins, botnet owners etc, AWS cowboys. I know this well as I was mining BTC before there were optimized CPU miners- and no GPU miners nor pools, and it was not even called mining, but "Generating'. It was always centralized to some degree.  With the rolling snowball effect of those who are profiting ploughing in more towards expansion you can only ever attempt to stave off specialized custom silicon.



Cool, thanks for those links - will read up on them and try to understand them!!

re: the first bold, I agree, which is why these things would need to be done before things get too cemented. Furthermore, one of the pillars of Monero is decentralization - so I tend to believe that efforts would be made to maintain decentralization even at the cost of switching things as fundamental as the POW.

re: second bold - I agree to a degree ( snarf ), but it's the "some degree" that matters, IMO. At the existing difficulty, it is still possible for your average PC consumer to mine Monero, even though there are almost certainly botnets, AWS cowboys, and sysadmins already on the network. This is *not* the case with Bitcoin, and soon Litecoin, given scrypt ASIC development.

re: third bold: e x a c t l y. That's what a theoretical polymorphism would do. A guaranteed 10-year window where the only thing that will work is a CPU.... granted, it's impossible to know CPU architecture in the future.... but, we got math.

and there will soon be a missive detailing your point re: voting through forks, if I understood correctly.



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: Hueristic on August 10, 2015, 03:33:20 PM
Removing support for payment IDs entirely could be a consensus change. I have no idea if they intend that.

The transaction format could be made a lot cleaner if tx_extra were remove and a fixed size transaction-key field were added instead. Plus it would remove one obvious way for scumbags to stuff kiddie porn on the blockchain.



Shit, only skimmed this thread but this popped out at me. Is there that shit on the chain right now or are you warning of a possibility? Is anyone with the chain getting setup? To remove would take a hard fork prune?

How about adding an option to boycott specific transaction metadata and maintaining a blacklist? Is there another place where this is being discussed?


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on August 10, 2015, 04:03:17 PM
Removing support for payment IDs entirely could be a consensus change. I have no idea if they intend that.

The transaction format could be made a lot cleaner if tx_extra were remove and a fixed size transaction-key field were added instead. Plus it would remove one obvious way for scumbags to stuff kiddie porn on the blockchain.



Shit, only skimmed this thread but this popped out at me. Is there that shit on the chain right now or are you warning of a possibility? Is anyone with the chain getting setup? To remove would take a hard fork prune?

How about adding a option to boycott specific transaction meta data and maintaining a blacklist? Is there another place where this is being discussed?

It's been discussed countless times on this forum and elsewhere in the context of Bitcoin (which indeed does have some nasty stuff on its blockchain) and possibly other coins. There isn't really a good solution, but there are less bad solutions.




Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on August 10, 2015, 04:34:00 PM
RE: POW mod.

It seems that CryptoNight is fighting the good fight re: maintaining CPU dominance, with few benefits for GPUs (besides hardware-configuration scalability). From what I understand, it does this by being "memory-hard" - i.e., requiring fast random access to memory to be efficient. Today's CPUs are designed with fast memory built in (the L3 cache), whereas GPUs have high-bandwidth memory but not necessarily low-latency memory (random read/write).

I know Aeon recently modded CryptoNight to a lower memory requirement (1 MB per instance).

I wonder if the algorithm could be modified so this parameter is slowly adaptive. While it's impossible to know how hardware will evolve, the general trend in architecture has been to get faster memory closer to the CPU (apparently new chips have an L4 cache).

I.e., whatever factor causes the algorithm to require 2 MB (or 1 MB in the case of Aeon) could be modified to increase over time.... talking on the scale of decades, wherein over 10 years the memory requirement would go from 2 MB to 3 MB - dated using blocktime, of course. Or maybe it should be exponential, to follow the actual evolution of CPU cache.

http://images.anandtech.com/reviews/cpu/intel/nehalem/part3/totalcache.png
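The schedule described above could be as simple as a deterministic function of block height, so every node agrees without any oracle. The constants here are illustrative guesses, not a real proposal's numbers:

```python
# Constants are illustrative guesses, not a real proposal's numbers.
BLOCKS_PER_YEAR = 262800        # ~2-minute blocks
BASE_MB = 2.0                   # CryptoNight scratchpad today
GROWTH_PER_YEAR = 1.05          # ~5%/yr, loosely tracking cache growth

def scratchpad_mb(height: int) -> float:
    """Memory-hardness parameter as a pure function of block height:
    fixed by consensus, needs no oracle, known years in advance."""
    years = height / BLOCKS_PER_YEAR
    return BASE_MB * GROWTH_PER_YEAR ** years   # exponential, per the post
```

Because the schedule is public decades out, ASIC designers would be chasing a moving target whose trajectory is known but whose silicon cost keeps rising.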


So what would this do? On one hand, it wouldn't do much besides make outdated hardware useless. In a world without the adaptive tech, people with old hardware would get less of the network hashrate, because newer tech has larger / faster cache. In a world with adaptive tech, people with old hardware get less of the network hashrate, because the algorithm has changed to become harder. So, I think things are equal on this front.

What it does do is prohibit useful development of ASICs - and any ASICs that are developed would have a limited useful shelf life. Perhaps I will compare CryptoNight-Light with the standard version to see where that parameter is.

Of course, this ties the POW to a function of time, which could get tricky. But I think that would be the easiest implementation of an adaptive system. Otherwise, we would have to make the protocol "sentient" in some fashion, and these could all be gamed - e.g., the protocol "knows" how frequently particular miners are being rewarded, and infers centralization.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on August 10, 2015, 05:23:05 PM
The actual cache size is less important than cache per core. That seems to have been fairly constant over a several year period (i.e. more cores and more cache both over time). How far into the future that will continue ???
 
Whether L4 is worth using for this is unclear. It is about half the speed of L3. That probably translates into lower hash/watt, so kinda bad.

An algorithm with a much bigger memory footprint, like cuckoo, without a corresponding increase in verification time, might be sized for L4 though.



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on August 20, 2015, 06:16:57 AM
I am trying to recall and regenerate some of my thought processes from my 2013 research about memory-hard hash function designs.

The basic idea of Scrypt is to one-way hash a value and write the hash output to a memory location, hash that output and write the result to the next memory location, and repeat until all the memory you want to use is written. Then random-access read from memory locations, using the value read to determine the next memory location to read. It is hoped that your algorithm becomes random-access-latency bound. However, the problem is a GPU or ASIC can trade computation for memory size by only storing every Nth hash output and then calculating on the fly the other hash values as they are accessed. So then the algorithm becomes more compute bound than latency bound. However, that doesn't necessarily mean hashes/watt is superior with the compute-bound version of the algorithm. It depends on the relative power inefficiencies of the smaller memory versus the increased computation. Also, if hashes/capital cost is a factor in the return on investment, then hashrate versus cost of the silicon hardware for two algorithms has to be compared, noting that smaller caches typically have nearly an order-of-magnitude lower random-access latencies.
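The fill-then-random-read structure and the "store every Nth value" tradeoff described above can be sketched as a toy model (using SHA-256 as the one-way hash and tiny illustrative sizes, not real Scrypt parameters):

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def fill(seed: bytes, n: int) -> list:
    """Sequential fill: v[i] is a hash chain from the seed."""
    v = [H(seed)]
    for _ in range(1, n):
        v.append(H(v[-1]))
    return v

def scrypt_like(seed: bytes, n: int = 256, reads: int = 256) -> bytes:
    """Honest (memory/latency-bound) evaluation: keep all n values."""
    v = fill(seed, n)
    acc = v[-1]
    for _ in range(reads):               # data-dependent random reads
        j = int.from_bytes(acc[:4], "big") % n
        acc = H(acc + v[j])
    return acc

def scrypt_tmto(seed: bytes, n: int = 256, reads: int = 256,
                keep: int = 16) -> bytes:
    """TMTO (compute-bound) evaluation: store only every `keep`-th chain
    value and recompute the rest on the fly, trading memory for hashing."""
    stored = {}
    cur = H(seed)
    for i in range(n):
        if i % keep == 0:
            stored[i] = cur              # checkpoint
        cur = H(cur)

    def lookup(j: int) -> bytes:
        x = stored[j - j % keep]
        for _ in range(j % keep):        # recompute the gap
            x = H(x)
        return x

    acc = lookup(n - 1)
    for _ in range(reads):
        j = int.from_bytes(acc[:4], "big") % n
        acc = H(acc + lookup(j))
    return acc
```

Both functions produce the same digest; the second uses 1/16th of the memory at the cost of extra hashing per read, which is exactly the memory-for-computation trade described above.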

I suppose the ASICs and GPUs mining Litecoin proved that the factors favored the compute bound algorithm, at least for the Scrypt parameters Litecoin uses.

The next attempt at an improvement over Scrypt (which I wrote down in 2013, well before the CryptoNight proof-of-work hash was published) was to not just read pseudo-randomly as described above, but to write new pseudo-random values to each pseudo-randomly accessed location. This defeats storing only every Nth hash output. However, it is still possible to trade latency for computation by storing only the sequence of locations accessed to compute each value, instead of storing the values themselves. The compute-bound version of the algorithm cannot be defeated by reading numerous locations before storing a value, because only the starting seed needs to be stored.
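A minimal sketch of this read-write variant (again a toy using SHA-256 with hypothetical sizes): each step overwrites the slot it just read, so the scratchpad's contents depend on the entire access history and the "store every Nth value" shortcut no longer applies directly.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def read_write_mix(seed: bytes, n: int = 256, steps: int = 1024) -> bytes:
    """Toy read-write scratchpad: each step reads a pseudo-random slot,
    mixes it into the accumulator, then writes the new value back."""
    v = [H(seed + i.to_bytes(4, "big")) for i in range(n)]
    acc = H(seed)
    for _ in range(steps):
        j = int.from_bytes(acc[:4], "big") % n
        acc = H(acc + v[j])
        v[j] = acc            # the write-back that defeats the simple TMTO
    return acc
```

An attacker can still avoid storing full values by replaying the access sequence from the seed, but that replay is itself serial hashing, which is the point made in the next sentence: the defense against a compute-bound shortcut makes the honest algorithm compute bound too.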

Thus I concluded it is always possible to trade computation for latency in any memory-hard hash function.

ASICs are usually orders-of-magnitude more efficient at preordained computation, thus they will probably always be able to use a smaller cache than is used by the memory-hard CPU algorithm.

Even if the ASIC did not trade more computation for less latency, the CPU will be using more electricity to do the computational parts of the latency-bound algorithm, such as the one-way hash functions or however the data is manipulated before each write to make it pseudo-random. Even though the CPU can mask the time required to do this computation by running two threads per cache, thus doing computation during the latency of the cache accesses of the partner thread, the CPU cannot make the electricity cost of the computation disappear. And the ASIC is usually orders-of-magnitude more efficient at preordained computation, because the CPU has all this general computation logic such as generalized pipelines, the L1 and L2 caches (with their 8-way set associativity overhead!), and lots of other transistors that are not off when not being used.

Thus, as per Litecoin's experience, I do not expect any derivative of a memory-hard hash function, including CryptoNight, to be immune from radical speedups and efficiency improvements due to well-designed ASICs. This of course won't happen until a CryptoNight coin has a very large market cap.

There is another issue. Using large caches such as 1 or 2 MB and smaller data words, e.g. 256-bit hashes, and then the large number of random-access reads and writes necessary to attempt to defeat compute-bound algorithms, means the hash can become very slow, perhaps as slow as 10 or less hashes per second. This introduces an additional problem because the pool will expend nearly as much resources verifying mining share hashes as the miner will expend producing them. In 2013, I had attempted to resolve this side issue by successfully designing a 4096-bit hash based on the concepts of Bernstein's ChaCha hash. Anyway, I abandoned it all because of the bolded statement above.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on August 20, 2015, 07:20:15 AM
ASICs are usually orders-of-magnitude more efficient at preordained computation, thus they will probably always be able to use a smaller cache than is used by the memory-hard CPU algorithm.

This sort of general statement is too broad to be useful. Most of the work being done in cryptonight is already done by dedicated hardware (AES round or 64x64 multiply), which is very different from the experience with SHA256 or Scrypt, where the CPU implementation of the hashes uses many simple ALU operations. The inner loop of cryptonight is about a dozen instructions. By contrast SHA256 is roughly 3375 basic integer operations (https://bitcointalk.org/index.php?topic=7964.msg550288#msg550288). That's a huge difference in kind. I can't speak to Scrypt as I haven't studied it at all.

For example, in the case of AES ASICs that exist, they are not orders of magnitude faster than the in-CPU implementation and likely also aren't orders-of-magnitude better in terms of power-efficiency either (though the latter numbers are harder to come by and harder to even estimate in the case of the in-CPU implementation).

Intel reports (http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/haswell-cryptographic-performance-paper.pdf) 4.52 cycles per byte single threaded, which at 2 GHz clock comes to roughly 3.5 gigabits per second. That's in line with reported commercial AES ASICs such as the Helios FAST core (http://www.heliontech.com/aes.htm) (3 Gbps). Their Giga core is one order of magnitude faster (40 Gbps), but carries a heavy penalty in latency and gate count (sounds like extremely heavily pipelined to me, though that could conceivably be okay for mining).
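As a sanity check, Intel's cycles-per-byte figure does convert to roughly the throughput of the commercial core quoted above (the numbers here are just the ones from the linked sources, worked through):

```python
# Converting Intel's reported AES-NI figure to throughput at a 2 GHz clock.
cycles_per_byte = 4.52            # Intel whitepaper, single-threaded
clock_hz = 2e9                    # 2 GHz
bytes_per_sec = clock_hz / cycles_per_byte
gigabits_per_sec = bytes_per_sec * 8 / 1e9   # roughly 3.5 Gbps
```

That lands at about 3.5 Gbps, in line with the Helion FAST core, leaving the Giga core's 40 Gbps as the one-order-of-magnitude gap discussed below.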

But from that one order of magnitude you have to subtract the added costs of the more complex table lookup.

This is not to say that no TMTO is potentially possible but the assumption of orders-of-magnitude gains on the T side with which to make that tradeoff is questionable at best (and remember this doesn't get you to no memory, only to less memory). But as you say earlier in your post, the feasibility and efficiency payoff of this depends very much on the numbers. It is certainly possible that the most efficient way to implement any particular algorithm is with a simple lookup table, even if such a table is not strictly required.

I remain fairly unconvinced that cryptonight necessarily can't use the latency bound of a lookup table access (which is largely an irreducible quantity) and the inherent cost of such a table along with the significant costs of reducing it to limit the payoff from highly-parallel dedicated implementations. But that doesn't mean that it can either.

I understand your comments about associative caches vs a simple lookup table but it still isn't at all clear that large gains are possible. It seems most of the latency of larger caches just comes from them being larger. But then this too remains to be seen.

Quote
hash can become very slow, perhaps as slow as 10 or less hashes per second. This introduces an additional problem because the pool will expend nearly as much resources verifying mining share hashes as the miner will expend producing them

Believe it or not we saw one brain-dead, or more likely deliberate scam, attempt that had 0.1 (!) hashes/sec!

But in any case the pool expending more resources will never happen, regardless of the hash rate. In order to produce a valid share you have to hash many invalid ones, but only the valid share is sent to the pool. If you send invalid shares you will be banned. You can attempt to sybil attack, but then the pool can respond by requiring some sort of "one time" registration fee or identity validation. These are likely desirable anyway, as probably the only defenses against share withholding.

The bigger problem with super-slow hashes is verification by the network. Despite some early concerns (I guess maybe before the implementation was un-deoptimized), cryptonight doesn't seem too bad; it only requires about 20ms, which is roughly in line with how long it might take to physically deliver a block over the network. Networks vary of course. Two nodes in the same data center will be much faster than that. Across the world it might take longer even if the block is empty.






Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on August 20, 2015, 08:20:15 AM
ASICs are usually orders-of-magnitude more efficient at preordained computation, thus they will probably always be able to use a smaller cache than is used by the memory-hard CPU algorithm.

This sort of general statement is too broad to be useful. Most of the work being done in cryptonight is already done by dedicated hardware (AES round or 64x64 multiply), which is very different from the experience with SHA256 or Scrypt, where the CPU implementation of the hashes uses many simple ALU operations. The inner loop of cryptonight is about a dozen instructions. By contrast SHA256 is roughly 3375 basic integer operations (https://bitcointalk.org/index.php?topic=7964.msg550288#msg550288). That's a huge difference in kind. I can't speak to Scrypt as I haven't studied it at all.

Scrypt uses the very efficient Salsa (ChaCha is nearly identical) one-way hash from Bernstein (not SHA256) which has a very low count of ALU operations. Afaik, Litecoin (which uses Scrypt) has ASICs which I assume have displaced CPU and GPU mining. I haven't been following Litecoin since the ASICs were rumored to be released.

Even if you are using dedicated hardware, to some extent it must be diluting the benefit of doing so by accessing them through generalized pipelines, 3 levels of caches (L1, L2, L3), and all the glue logic to make all those parts of the CPU work for the general purpose use case of the CPU. It is very unlikely that you've been able to isolate such that you are only activating the same number of transistors that the ASIC would for the specialized silicon.

Thus I think the general statement is not too broad, but rather apropos until someone can convince me otherwise.

For example, in the case of AES ASICs that exist, they are not orders of magnitude faster than the in-CPU implementation and likely also aren't orders-of-magnitude better in terms of power-efficiency either (though the latter numbers are harder to come by and harder to even estimate in the case of the in-CPU implementation).

Intel reports (http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/haswell-cryptographic-performance-paper.pdf) 4.52 cycles per byte single threaded, which at 2 GHz clock comes to roughly 3.5 gigabits per second. That's in line with reported commercial AES ASICs such as the Helios FAST core (http://www.heliontech.com/aes.htm) (3 Gbps). Their Giga core is one order of magnitude faster (40 Gbps), but carries a heavy penalty in latency and gate count (sounds like extremely heavily pipelined to me, though that could conceivably be okay for mining).

Well yeah, I've cited similar references in my work. That is why I went with AES-NI for my CPU proof-of-work hash, and why I did not access caches, instead keeping a tight loop that can be loaded once in microcode without re-engaging the pipeline logic over and over, and trying to minimize the number of other transistors I would be activating in the CPU. One can assume the CPU will be highly optimized for doing encryption, since OSes offer full-disk encryption.

Afaics, the complexity of CryptoNight's bag-o'-all-tricks (AES-NI, 64x64 multiply, Scrypt-like cache latency) makes it less likely that extra generalized logic in the CPU won't be engaged, e.g. checking whether an access hits in L1 and L2 before going to L3, which an ASIC would not need to do.

But from that one order of magnitude you have to subtract the added costs of the more complex table lookup.

Not even that. Just run the ASIC with a full-size cache but eliminate L1, L2, and the other generalized logic that is consuming electricity. Intel is working hard at powering down parts of the CPU when not in use, but it is still not very fine-grained and will never be the equivalent of specializing all the logic in an ASIC.

We try to minimize the advantage the ASIC will have, but we can not be at parity with the ASIC.

This is not to say that no TMTO is potentially possible but the assumption of orders-of-magnitude gains on the T side with which to make that tradeoff is questionable at best (and remember this doesn't get you to no memory, only to less memory). But as you say earlier in your post, the feasibility and efficiency payoff of this depends very much on the numbers. It is certainly possible that the most efficient way to implement any particular algorithm is with a simple lookup table, even if such a table is not strictly required.

I think perhaps you are missing my point and perhaps because it had to be deduced from what I wrote in the prior post, i.e. I didn't state it explicitly.

In order to make it so it is not economical for the ASIC to convert the latency bound to compute bound, you end up making your "memory-hard" hash compute bound in the first place! That was a major epiphany.

Thus why waste the time pretending the memory-hard complexity is helping you. If anything it is likely hurting you. Just go directly to using AES-NI for the hash. Which is what I did in spades.

I remain fairly unconvinced that cryptonight necessarily can't use the latency bound of a lookup table access (which is largely an irreducible quantity) and the inherent cost of such a table along with the significant costs of reducing it to limit the payoff from highly-parallel dedicated implementations. But that doesn't mean that it can either.

If you are not doing the algorithm with serialized random access the way I described it as an improvement over Scrypt, then yes parallelism can eat your lunch.

And if you do it the way I described, then you end up making it compute bound in order to prevent an ASIC from making it compute bound. So no matter which way you go, you are going to be compute bound.

I understand your comments about associative caches vs a simple lookup table but it still isn't at all clear that large gains are possible. It seems most of the latency of larger caches just comes from them being larger. But then this too remains to be seen.

Afair, I believe my concern was about the extra transistors being activated which the ASIC won't need. Again, remember I am recalling this from memory, not having looked at my old work today; I don't really have time to do an exhaustive reload of my prior hash work.

Quote
hash can become very slow, perhaps as slow as 10 or less hashes per second. This introduces an additional problem because the pool will expend nearly as much resources verifying mining share hashes as the miner will expend producing them

Believe it or not we saw one brain-dead, or more likely deliberate scam, attempt that had 0.1 (!) hashes/sec!

And there was MemoryCoin with afair an 8MB memory hard hash and with upwards of 1 second per hash.

But in any case the pool expending more resources will never happen, regardless of the hash rate. In order to produce a valid share you have to hash many invalid ones, but only the valid share is sent to the pool.

I am talking about the ratio. If the pool is doing 1/1000th of the work of the share miners, then the pool maybe has to charge a larger fee.

Anyway, that wasn't my major point. I am just pointing out that there is a limit to how large you can make the memory cache, and thus how slow the hash function can be.

The bigger problem with super-slow hashes is verification by the network. Despite some early concerns (I guess maybe before the implementation was un-deoptimized), cryptonight doesn't seem too bad; it only requires about 20ms, which is roughly in line with how long it might take to physically deliver a block over the network. Networks vary of course. Two nodes in the same data center will be much faster than that. Across the world it might take longer even if the block is empty.

Also theoretically the relative slowness of the hash verification could become a more severe problem in some imagined consensus network designs which attempted to eliminate pools and record all mining shares directly to the block chain.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on August 20, 2015, 08:45:42 AM
Scrypt uses the very efficient Salsa (ChaCha is nearly identical) one-way hash from Bernstein (not SHA256) which has a very low count of ALU operations. Afaik, Litecoin (which uses Scrypt) has ASICs which I assume have displaced CPU and GPU mining. I haven't been following Litecoin since the ASICs were rumored to be released.

Yes I was relating to the gains from SHA256 ASICs. I'm less familiar with Scrypt ASICs in all matters (how much they cost, how much performance they gain, etc.)

But as you are aware Scrypt is designed in a manner that makes the TMTO particularly trivial to implement. Even then it isn't necessarily the case that a Scrypt ASIC would pay off (as you said). It seems they do but that is an empirical result that depends on the particulars of the algorithm. It is not the case that the lookup table approach for read-write scratchpads is equally trivial. Still feasible, possibly, but it isn't obvious.

Quote
Even if you are using dedicated hardware, to some extent it must be diluting the benefit of doing so by accessing them through generalized pipelines, 3 levels of caches (L1, L2, L3), and all the glue logic to make all those parts of the CPU work for the general purpose use case of the CPU. It is very unlikely that you've been able to isolate such that you are only activating the same number of transistors that the ASIC would for the specialized silicon.

Clearly true, but less clear there is anything yielding "orders of magnitude" here, nor, again, that the gain is enough to offset the costs of trying to reduce latency.

Quote
We try to minimize the advantage the ASIC will have, but we can not be at parity with the ASIC

Of course, it is a given that you can take some things off the chip and yield an ASIC with better marginal economics. But equally, ASICs can (probably) never equal the economies of scale of general-purpose CPUs, either in production or R&D. So the actual numbers matter. If you can convincingly show an orders-of-magnitude gain, then it is easy to dismiss the rest. Otherwise not.

Quote
In order to make it so it is not economical for the ASIC to convert the latency bound to compute bound, you end up making your "memory-hard" hash compute bound in the first place! That was a major epiphany.

I don't think you have shown it is economical, only that it is possible. In fact you have specifically agreed that the gain from ASICs vs. in-CPU dedicated circuits is likely relatively small, which is why you propose using the in-CPU circuits!

Quote
Thus why waste the time pretending the memory-hard complexity is helping you. If anything it is likely hurting you. Just go directly to using AES-NI for the hash. Which is what I did in spades.

Because I think this more clearly loses to economies of scale: an ASIC can replicate the relatively small AES hardware many, many times while shedding the overhead of packaging, of instruction decoding inside a CPU (you can never eliminate all of that overhead, regardless of how "tight" your algorithm is), and of sitting inside a computer (memory, fans, batteries, I/O, etc.) with a relatively tiny hash rate, compared to a chip with 1000x AES circuits inside a mining rig with 100 chips. Memory gives physical size (and also its own cost, but mostly just size), which makes it impossible to replicate to whatever arbitrary degree is needed to amortize product-type and packaging overhead.

There may be economic arguments apart from mining efficiency for why people mine on computers vs. mining rigs. But if those exist they must not rely exclusively on pure raw efficiency (because that will clearly lose, by the argument in the previous paragraph) but on efficiency being "close enough" for the other factors to come into play. So again, the numbers matter. And once you resort to "close enough", then yet again the numbers matter.

Anyway, I won't be surprised to be proven "wrong" (not really wrong because I'm not saying any of this won't work, just that it isn't clear that it will work) but I will be surprised if the market cap at which cryptonight ASICs appear isn't a lot higher than Scrypt ASICs (adjusting for differences in time period and such).

EDIT: there is actually a real-world example of just-in-CPU crypto not being enough to compete with ASICs, and that is Intel's SHA extensions. They are not competitive with Bitcoin mining ASICs.



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on August 20, 2015, 09:49:43 AM
Scrypt uses the very efficient Salsa (ChaCha is nearly identical) one-way hash from Bernstein (not SHA256) which has a very low count of ALU operations. Afaik, Litecoin (which uses Scrypt) has ASICs which I assume have displaced CPU and GPU mining. I haven't been following Litecoin since the ASICs were rumored to be released.

Yes I was relating to the gains from SHA256 ASICs. I'm less familiar with Scrypt ASICs in all matters (how much they cost, how much performance they gain, etc.)

But as you are aware Scrypt is designed in a manner that makes the TMTO particularly trivial to implement. Even then it isn't necessarily the case that a Scrypt ASIC would pay off (as you said). It seems they do but that is an empirical result that depends on the particulars of the algorithm. It is not the case that the lookup table approach for read-write scratchpads is equally trivial. Still feasible, possibly, but it isn't obvious.

Indeed, converting to a more compute-intensive variant so that it is less trivial to convert to compute bound thus makes it compute bound anyway. Hehe. See where I ended up?

Even if you are using dedicated hardware, to some extent it must be diluting the benefit of doing so by accessing them through generalized pipelines, 3 levels of caches (L1, L2, L3), and all the glue logic to make all those parts of the CPU work for the general purpose use case of the CPU. It is very unlikely that you've been able to isolate such that you are only activating the same number of transistors that the ASIC would for the specialized silicon.

Clearly true, but less clear there is anything yielding "orders of magnitude" here, nor, again, that the gain is enough to offset the costs of trying to reduce latency.

Apologies, perhaps the plural in "orders" in my prior post is what you meant by it being an overly general statement, and in that case you are literally correct.

When I write orders in plural form, I include the singular form as one of the applicable cases.

I am more inclined to believe both CryptoNight and my pure AES-NI hash are within 1 - 2 orders of magnitude of any possible ASIC optimization, in terms of the relevant parameters of hashes/watt and hashes/$ capital outlay.

I would only argue that I feel a more complex approach, such as CryptoNight uses, has a greater risk of slipping toward the upper end of that 1 - 2 order range. Again, we are just making very rough guesses based on plausible theories.

We try to minimize the advantage the ASIC will have, but we can not be at parity with the ASIC

Of course, it is a given that you can take some things off the chip and yield an ASIC with better marginal economics. But equally, ASICs can (probably) never equal the economies of scale of general-purpose CPUs, either in production or R&D. So the actual numbers matter. If you can convincingly show an orders-of-magnitude gain, then it is easy to dismiss the rest. Otherwise not.

I almost mentioned Intel's superior fabs, but I didn't, because for one thing Intel has been failing to meet deadlines in recent years. Moore's law appears to be failing.

If we are talking about a $4 billion Bitcoin market cap then yeah. But if we are talking about a $10 trillion crypto-coin world, then maybe not so any more.

Agreed the devil is in the details.

I think what I am trying to argue is that I've hoped to stay below a 2 orders-of-magnitude advantage for the ASIC, and by being very conservative about which CPU features are combined, hopefully below 1 order of magnitude. And that is enough for me in my consensus network design, because I don't need economic parity with ASICs in order to eliminate the profitability of ASICs in my holistic coin design.

And I am saying that the extra CPU features that CryptoNight is leveraging seem to add more risk, not less (not only risk of ASICs, but risk that someone else has found a way to optimize the source code). So I opted for simplicity and clarity instead. AES benchmarks against ASICs are published; CryptoNight-versus-ASIC benchmarks don't exist. I prefer the marketing argument that is easier to support.

In order to make it so it is not economical for the ASIC to convert the latency bound to compute bound, you end up making your "memory-hard" hash compute bound in the first place! That was a major epiphany.

I don't think you have shown it is economical, only that it is possible. In fact you have specifically agreed that the gain from ASICs vs. in-CPU dedicated circuits is likely relatively small, which is why you propose using the in-CPU circuits!

I think perhaps you are missing my point. In order to make it only "possible" and not likely for the ASIC to convert the latency-bound CPU hash to compute bound, it appears you likely end up making the CPU hash compute bound anyway, by adding more computation between latency accesses, as I believe CryptoNight has done. Haven't benchmarked it, though.

So that is one reason why I discarded all that latency-bound circuitry and just applied K.I.S.S. to the documented less-than-1-order-of-magnitude advantage for ASICs w.r.t. AES-NI. And then I also did one innovation better than that (secret).

Thus why waste the time pretending the memory-hard complexity is helping you. If anything it is likely hurting you. Just go directly to using AES-NI for the hash. Which is what I did in spades.

Because I think this more clearly loses to economies of scale: an ASIC can replicate the relatively small AES hardware many, many times while shedding the overhead of packaging, of instruction decoding inside a CPU (you can never eliminate all of that overhead, regardless of how "tight" your algorithm is), and of sitting inside a computer (memory, fans, batteries, I/O, etc.) with a relatively tiny hash rate, compared to a chip with 1000x AES circuits inside a mining rig with 100 chips.

In terms of hashes/capital outlay, yes. But not necessarily true for hashes/watt.

Even you admitted upthread that the 40 Gbytes/sec for ASIC on AES might not be an order-of-magnitude advantage in terms of Hashes/watt. I had seen some stats on relative Hashes/watt and I've forgotten what they said. Will need to dig my past notes for it.

But for our application, I wasn't concerned about the capital cost because the user has a $0 capital cost because they already own a CPU.

I was supremely focused on the Hashes/watt, because I know if I can get users to mine, and they don't notice a 1% increase in their electricity bill, then their Hashes/watt is infinite.

Also so that the rest of the user's CPU isn't tied up by the hashing, so they can mine and work simultaneously.

And thus I make ASICs relatively unprofitable. They may still be able to make a "profit" if the coin value rises to the level of hashrate invested (which would be so awesome if you have 100 million users mining), but the ASICs cannot likely dominate the network hashrate. Or at least they can't make it totally worthless for a user to mine, if there is perceived value in microtransactions and users don't want to hassle with an exchange just to use a service that only accepts microtransactions. But even that is not the incentive I count on to get users to mine.

There is a lot of holistic economics involved in designing your proof-of-work hash. It is very complex.

Memory gives physical size (and also its own cost, but mostly just size), which makes it impossible to replicate to whatever arbitrary degree is needed to amortize product-type and packaging overhead.

Agreed, if hashes/capital outlay is your objective; but you may also worsen your relative (to ASIC) hashes/watt. Maybe not, but again there are no CryptoNight-vs-ASIC benchmarks, so we can't know. There are AES benchmarks.

There may be economic arguments apart from mining efficiency for why people mine on computers vs. mining rigs. But if those exist they must not rely exclusively on pure raw efficiency (because that will clearly lose, by the argument in the previous paragraph) but on efficiency being "close enough" for the other factors to come into play. So again, the numbers matter. And once you resort to "close enough", then yet again the numbers matter.

Yup. Well stated, pointing out that as we get closer to parity on efficiency, all sorts of game-theoretic economic scenarios come into play. That is what I meant by saying it is complex.

Anyway, I won't be surprised to be proven "wrong" (not really wrong because I'm not saying any of this won't work, just that it isn't clear that it will work) but I will be surprised if the market cap at which cryptonight ASICs appear isn't a lot higher than Scrypt ASICs (adjusting for differences in time period and such).

Well, no one can know, for any circuit of reasonable complexity implemented on a complex chip like the CPU, which is somewhat of a black box. Bernstein even complained recently that Intel isn't documenting all its timings.

The basic problem as I see it is that the only thing CryptoNote does to deal with ASICs is the proof-of-work hash. It doesn't look at other aspects of the system for ways to drive away ASICs. And in some sense it shouldn't, since CryptoNote, like all existing crypto, is subject to a 51% attack and thus needs as much hashrate as it can get (up to some level).

EDIT: there is actually a real-world example of just-in-CPU crypto not being enough to compete with ASICs, and that is Intel's SHA extensions. They are not competitive with Bitcoin mining ASICs.

Seems I had read something about that, where Intel hadn't done the circuit work necessary to be competitive. As you said, SHA is a complex algorithm. If there is some instruction which is fundamental and highly optimized, e.g. 64x64 multiply, it might be as optimized as it can get in silicon. I doubt Intel has fully optimized anything to that degree, but apparently on the AES benchmarks they got close.

It seems to me that a slower SHA hash is not something that can slow your computer to a crawl. Whereas with full disk encryption, Intel needs AES to be very fast.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on August 20, 2015, 09:56:58 AM
Indeed, converting to a more compute-intensive variant that makes it less trivial to convert to compute bound thus makes it compute bound. Hehe. See where I ended up?

Well sure, certainly compute bound, but not necessarily useful at all (as you said in your first post).

Quote
It seems to me that a slower SHA hash is not something that can slow your computer to a crawl. Whereas with full disk encryption, Intel needs AES to be very fast.

Maybe maturity too. This is the first version of ISHAE (shipped for less than a month). I think AESNI has had at least two significant speed bumps (one for sure).

That may include seeing if people actually use it. Otherwise it may get slower (or even gone; as usual with these extensions you are supposed to always check CPUID before using it, so they can take it out whenever they want really). AESNI has the virtuous cycle of people using it because it was reasonably fast, which motivated Intel to make it even faster. Repeat. ISHAE may not.

Quote
the 40 Gbytes/sec for ASIC on

It was gigabits, btw. The main application for this stuff seems to be comm gear so usually quoted in bits (but yes an OOM faster than Intel in-CPU)



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on August 20, 2015, 11:40:49 AM
the 40 Gbytes/sec for ASIC on

It was gigabits, btw. The main application for this stuff seems to be comm gear so usually quoted in bits (but yes an OOM faster than Intel in-CPU)

But it wasn't stated whether it also consumed an OOM more electricity (than the Intel in-CPU).


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on August 21, 2015, 04:01:40 AM
I am proposing for discussion making the minimum per kb fee, F, a function of the difficulty as follows

F = F0*D0/DA

Where

F0 is a constant comparable to the current per kb fee
D0 is a constant comparable to the current difficulty
DA is the average difficulty of the blocks N-2K to N-K
N is the last block number
K is a constant for example 1000  

Rationale: The concept is that the difficulty is an approximate measure of the cost of hardware, bandwidth, etc., in terms of XMR. The minimum fee would then approximate the adverse impact of the spam it is intended to prevent. Furthermore, as the purchasing power of XMR increases the fee in XMR would decrease, while if the purchasing power of XMR decreases the fee in XMR would increase. The remaining suggested fees for both nodes and miners could also be dynamically set in terms of the minimum per kb fee.
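A minimal sketch of this rule (Python; the `F0` and `D0` values below are illustrative placeholders drawn from this discussion, not actual consensus constants):

```python
def average(values):
    """Average difficulty over blocks N-2K .. N-K (DA in the proposal)."""
    return sum(values) / len(values)

def min_fee_per_kb(DA, F0=0.01, D0=865_102_700):
    """Minimum per-kB fee, scaled inversely with average difficulty.

    F0: reference per-kB fee in XMR, D0: reference difficulty.
    Both are illustrative constants, not actual network values.
    """
    return F0 * D0 / DA

# If average difficulty doubles relative to the reference, the fee halves:
print(min_fee_per_kb(2 * 865_102_700))  # ≈ 0.005
```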


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on August 21, 2015, 01:35:11 PM
I am proposing for discussion making the minimum per kb fee, F, a function of the difficulty as follows

F = F0*D0/DA

Where

F0 is a constant comparable to the current per kb fee
D0 is a constant comparable to the current difficulty
DA is the average difficulty of the blocks N-2K to N-K
N is the last block number
K is a constant for example 1000  

Rationale: The concept is that the difficulty is an approximate measure of the cost of hardware, bandwidth, etc., in terms of XMR. The minimum fee would then approximate the adverse impact of the spam it is intended to prevent. Furthermore, as the purchasing power of XMR increases the fee in XMR would decrease, while if the purchasing power of XMR decreases the fee in XMR would increase. The remaining suggested fees for both nodes and miners could also be dynamically set in terms of the minimum per kb fee.

Interesting. The difficulty is possibly the best surrogate for the value of XMR. Of course there's the botnet / sysadmin issue (i.e., boost difficulty with abstract costs), but IMO that's not really addressable so it's best ignored.

So basically use the existing numbers as constants.

What are the possible attack / manipulation vectors?

1. To drive the min per kb fee down to an unusable level, one would have to increase the hashrate of the network for an extended amount of time.
2. To drive the min per kb fee up, one would have to decrease the hashrate of the network, which is achievable by attacking pools (if we don't find a way to escape mining centralization).

The problem, I think, will be determining the width of the rolling window (K). Prices can skyrocket overnight, and people wouldn't transact if the network difficulty doesn't catch up to the valuation overnight.

Or maybe prices will be more gradual, who knows.

With the above proposal, I think we would have to include a hard floor and ceiling. I.e., if the entire solar system is using monero (presuming we've figured out using superposition to overcome speed of light problems in a consensus network), the network difficulty will be astronomical. Would the above equation work?

0.01*865102700/865102700e12 = 0.00000000000001. I guess realistically the floor would be a piconero.

Finally, when utilizing difficulty as a means to adjust fees, I think we would have to wait to implement this until after the primary emission, or phase it in somehow (perhaps couple K to block reward). Right now the difficulty is high (presumably) due to the high block reward - it's a murky connection between cost of mining, valuation of XMR, and ideal cost of transacting.
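The floor/ceiling idea can be bolted onto the same formula. A sketch (the piconero, 1e-12 XMR, is Monero's smallest unit; the ceiling value and the reference constants are invented for illustration):

```python
PICONERO = 1e-12  # 1 XMR = 10**12 piconero, Monero's smallest unit

def clamped_min_fee(DA, F0=0.01, D0=865_102_700,
                    floor=PICONERO, ceiling=1.0):
    """Difficulty-scaled per-kB fee with a hard floor and ceiling.

    F0, D0 and the ceiling are illustrative values only.
    """
    return min(max(F0 * D0 / DA, floor), ceiling)

# The solar-system scenario: a 10**12-fold difficulty increase would
# push the raw fee to ~1e-14, so the piconero floor kicks in:
print(clamped_min_fee(865_102_700e12))  # 1e-12
```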


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on August 21, 2015, 02:19:49 PM
There should be a factor for block subsidy.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on August 21, 2015, 04:48:48 PM
There should be a factor for block subsidy.


Yes. In order for the difficulty to be a surrogate for the value of XMR one needs to factor in the actual block reward, so the formula would look as follows:

F = F0*D0*RA/(DA*R0)

Where

F0 is a constant comparable to the current per kb fee
R0 is a constant comparable to the current block reward
RA is the average block reward of the blocks N-2K to N-K
D0 is a constant comparable to the current difficulty
DA is the average difficulty of the blocks N-2K to N-K
N is the last block number
K is a constant for example 1000  

One could replace RA by the average base block reward, namely by netting out fees and penalties; however it would then not track value as closely, particularly if fees were to dominate the actual block reward in the future.
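A sketch of the extended rule with the reward factor (Python; R0 = 9 is a rough stand-in for the then-current block reward, and all constants remain illustrative, not consensus values):

```python
def min_fee_per_kb(DA, RA, F0=0.01, D0=865_102_700, R0=9.0):
    """F = F0 * D0 * RA / (DA * R0).

    DA, RA: difficulty and block reward averaged over blocks N-2K .. N-K.
    F0, D0, R0: reference constants (illustrative values only).
    """
    return F0 * D0 * RA / (DA * R0)

# Doubled difficulty with an unchanged reward still halves the fee,
# but a falling reward now pulls the fee down with it:
print(min_fee_per_kb(2 * 865_102_700, 9.0))  # ≈ 0.005
print(min_fee_per_kb(2 * 865_102_700, 4.5))  # ≈ 0.0025
```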

The question of the responsiveness of the difficulty to a sudden increase in price is valid to a degree; however it is tempered by the fact that the increase would be temporary, the increase would hit XMR holders at a time when they are experiencing a significant increase in the value of their XMR,  and in the worst case scenario any drop in transaction volume would likely temper the sudden price movement.  Attacks based on manipulating the hash rate I do not see as an issue here simply because any impact on fees is extremely minor compared to the risk of a 51% type attack.

My main concern here is to not create a situation similar to what is currently happening in Bitcoin with the blocksize debate. It is fairly easy to get the necessary consensus to fork an 18 month old coin; it is quite another matter to try to undo the fork 5 years later, particularly when there is no immediate threat. This is the critical lesson we must learn from Bitcoin.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on August 21, 2015, 05:32:27 PM
My main concern here is to not create a situation similar to what is currently happening in Bitcoin with the blocksize debate. It is fairly easy to get the necessary consensus to fork an 18 month old coin; it is quite another matter to try to undo the fork 5 years later, particularly when there is no immediate threat. This is the critical lesson we must learn from Bitcoin.

Fee rates don't need a fork or consensus though. Miners can mine whatever they want. People can set their wallets to use whatever fees they want. Relays can relay whatever they want, but if that becomes an impediment to miners and wallets, then wallets can connect directly to miners.

However, I still agree there is merit in default/recommended fees following some sort of metric rather than just an arbitrary number.





Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on August 21, 2015, 05:44:53 PM
My main concern here is to not create a situation similar to what is currently happening in Bitcoin with the blocksize debate. It is fairly easy to get the necessary consensus to fork an 18 month old coin; it is quite another matter to try to undo the fork 5 years later, particularly when there is no immediate threat. This is the critical lesson we must learn from Bitcoin.

Fee rates don't need a fork or consensus though. Miners can mine whatever they want. People can set their wallets to use whatever fees they want. Relays can relay whatever they want, but if that becomes an impediment to miners and wallets, then wallets can connect directly to miners.

However, I still agree there is merit in default/recommended fees following some sort of metric rather than just an arbitrary number.


Thanks for the clarification. It is a critical distinction that the minimum fee is not enforced in the blockchain but rather by convention among the nodes. So if this became an issue, nodes could simply start relaying transactions with lower than the minimum fee without creating a hard fork.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on August 21, 2015, 06:03:20 PM
There should be a factor for block subsidy.


Yes. In order for the difficulty to be a surrogate for the value of XMR one needs to factor in the actual block reward, so the formula would look as follows:

F = F0*D0*RA/(DA*R0)

Where

F0 is a constant comparable to the current per kb fee
R0 is a constant comparable to the current block reward
RA is the average block reward of the blocks N-2K to N-K
D0 is a constant comparable to the current difficulty
DA is the average difficulty of the blocks N-2K to N-K
N is the last block number
K is a constant for example 1000  

One could replace RA by the average base block reward, namely by netting out fees and penalties; however it would then not track value as closely, particularly if fees were to dominate the actual block reward in the future.

The question of the responsiveness of the difficulty to a sudden increase in price is valid to a degree; however it is tempered by the fact that the increase would be temporary, the increase would hit XMR holders at a time when they are experiencing a significant increase in the value of their XMR,  and in the worst case scenario any drop in transaction volume would likely temper the sudden price movement.  Attacks based on manipulating the hash rate I do not see as an issue here simply because any impact on fees is extremely minor compared to the risk of a 51% type attack.

My main concern here is to not create a situation similar to what is currently happening in Bitcoin with the blocksize debate. It is fairly easy to get the necessary consensus to fork an 18 month old coin; it is quite another matter to try to undo the fork 5 years later, particularly when there is no immediate threat. This is the critical lesson we must learn from Bitcoin.

Exactly. That's why I wanted to get these conversations going. IMO, the fee thing is the most addressable component of the code that needs to be dehumanized ASAP.

Re: first bold - are we talking block reward (as in, fees + hard-coded emission) or block subsidy (hard-coded emission)?

Regarding these semantics, and the concept of using the difficulty + block reward as a surrogate for XMR value..

I think we could use fees per block as an additional surrogate (to fit somewhere in the equation). So, remove base block reward.

Ultimately, I think I'm just talking my way back to using the # of transactions / block as a surrogate for valuation. Because if we just use total fees, or fees / transaction, we run into the problem that high mixin transactions are huge, and thus have a higher fee, so the total reward per block can be skewed by an era of high mixin transactions.

here's a ~2X difficulty increase, where the emission has stopped and the average block reward is 2.2 (lots of transactions)

0.01 * 9e8 * 2.2 / (1.8e9 * 9) = 0.0012

here's a 2X difficulty increase, where the emission has stopped and average block reward is 0.2 (very few transactions)

0.01 * 9e8 * 0.2 / (1.8e9 * 9) = 1.1e-4

IMO, I don't agree with this behavior. To me, a lot of transactions indicate that the coin has higher value - the real world actually values the coin so it is used as a means to transfer value. Few transactions indicate the coin has less value, because the real world isn't using it that much. That the fee decreases is odd.

Perhaps it has to do with using the block subsidy and not just total fees. Here's using just the fees (using 0.001 as average fees for existing usage)

0.01 * 9e8 * 2.2 / (1.8e9 * 0.001) = 11

hrm...... did I do my math wrong? PMDAS
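For what it's worth, re-running those three evaluations (a quick sanity check, using the same numbers as above) reproduces the figures quoted, so the order of operations was fine:

```python
def fee(F0, D0, RA, DA, R0):
    # F = F0 * D0 * RA / (DA * R0)
    return F0 * D0 * RA / (DA * R0)

# Doubled difficulty, emission over, busy blocks (RA = 2.2):
print(fee(0.01, 9e8, 2.2, 1.8e9, 9))      # ≈ 0.00122
# Doubled difficulty, very few transactions (RA = 0.2):
print(fee(0.01, 9e8, 0.2, 1.8e9, 9))      # ≈ 0.000111
# Replacing R0 with 0.001 (average fees per block) blows the fee up:
print(fee(0.01, 9e8, 2.2, 1.8e9, 0.001))  # ≈ 11.0
```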





Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: MoneroMooo on August 21, 2015, 08:28:22 PM
Two points of interest:

- a moving min fee like this might cause delayed transactions to be spuriously rejected, if the min fee increases just after the tx is broadcast. Maybe some kind of slack is warranted. However, if some slack is allowed, it is expected that some people will pay fees taking the slack into account, nullifying the slack.

- some people might decide to "bet" on decreasing fees, and delay transactions till they can be sent with a lower fee. This might have knock-on effects on blockchain utilization: both instantaneous decreased use, and burst use when the fee goes down.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on August 21, 2015, 10:07:54 PM
The slack would be built into the spread between the minimum fee for relay and the suggested fee to the user. If the user goes out of her way to pay just the minimum fee then she would run the risk of her transaction not being relayed. The averaging over a number of blocks is designed to dampen the impact of a sudden rise in difficulty. I am not convinced that users would go out of their way to delay transactions in order to speculate on lower fees since the cost of delaying the transaction would be larger in most cases than any possible fee savings.

The problem I see with using fees paid as a measure of the value of XMR is that, unlike the difficulty, this could be easy to game. A spammer could gradually ramp up the spam, since as the total fees per block increase, individual per-kb fees would drop. This would leave the actual base reward, before the inclusion of fees and/or penalties, as the optimal parameter for RA.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on August 23, 2015, 03:20:45 AM
The slack would be built into the spread between the minimum fee for relay and the suggested fee to the user. If the user goes out of her way to pay just the minimum fee then she would run the risk of her transaction not being relayed. The averaging over a number of blocks is designed to dampen the impact of a sudden rise in difficulty. I am not convinced that users would go out of their way to delay transactions in order to speculate on lower fees since the cost of delaying the transaction would be larger in most cases than any possible fee savings.

The problem I see with using fees paid as a measure of the value of XMR is that, unlike the difficulty, this could be easy to game. A spammer could gradually ramp up the spam, since as the total fees per block increase, individual per-kb fees would drop. This would leave the actual base reward, before the inclusion of fees and/or penalties, as the optimal parameter for RA.

I think the aspect that would make it easy to game is the timescale... the rolling window within which the average is calculated. I think if you made the window absurdly large it would become impossible to game. Perhaps 6 months in blocktime. The amount of spam required to ramp up over a 6 month period would be significant.
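As a back-of-envelope: with the roughly 60-second block target Monero used at the time (an assumption here; the target is whatever consensus says), a six-month window works out to:

```python
BLOCK_TARGET_SECONDS = 60        # assumed ~1-minute target block time
SIX_MONTHS_SECONDS = 182 * 24 * 3600

K = SIX_MONTHS_SECONDS // BLOCK_TARGET_SECONDS
print(K)  # 262080 blocks in the averaging window
```

Sustaining a spam ramp across a quarter-million blocks is a very different proposition from gaming a window of K = 1000.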

Hrm, and perhaps the other problem with using total fees paid as a measure of value of XMR is that the formula becomes recursive or is looped. The new minimum fee calculated will influence the total fees in the next block.

Gah, I'm trying to attack this with Excel. It's really uncharted territory. The only decentralized currency network we have to work with, as an example of how the difficulty will rise, is Bitcoin, and ASICs really borked that. Perhaps I need to study Litecoin. I think if we could extract some models / functions from those data, we could get a better sense of how the system operates, and thus design the fee equation. Does anyone know how to do this? If you can get the raw data, I think I have some software that could pull some equations from the data.



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on August 24, 2015, 11:23:03 PM
The Bitcoin price data going back to July 10, 2010 can be downloaded from CoinDesk http://www.coindesk.com/price/ (http://www.coindesk.com/price/) It is the CoinDesk BPI. The difficulty data going back to January 3, 2009 can also be downloaded also from CoinDesk http://www.coindesk.com/data/bitcoin-mining-difficulty-time/ (http://www.coindesk.com/data/bitcoin-mining-difficulty-time/). In both cases click on Export in the chart and select CSV Chart Data. There was some Bitcoin trading before July 18, 2010 on New Liberty Standard and Bitcoin Market both of which have gone defunct. The New Liberty Standard site is still up with the 2009 data. http://newlibertystandard.wikifoundry.com/page/2009+Exchange+Rate (http://newlibertystandard.wikifoundry.com/page/2009+Exchange+Rate) however the early 2010 data is now gone. I did manage to download the early 2010 data in November 2013 before it was lost. Send me a PM and I can send the file. LibreOffice (.ods) format only to preserve the download timestamp.

One thing to keep in mind is that difficulty measures price in terms of computing resources. As for the time period, for Bitcoin the best is 2010 - 2012. Still, there is an evolution here from CPU to GPU and then some FPGA before ASIC mining. One other option that would be clean is actually Monero over the last 14 months, since in the Monero case the technology has not changed.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on August 24, 2015, 11:52:06 PM
sweet, thanks for the leads. Currently hunting down litecoin info.

edited to add: got the difficulty / time.

i think I can hack this to get price history... its somewhere in here

view-source:https://coinplorer.com/Charts?fromCurrency=LTC&toCurrency=USD


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on August 24, 2015, 11:53:14 PM
The Bitcoin price data going back to July 10, 2010 can be downloaded from CoinDesk http://www.coindesk.com/price/ (http://www.coindesk.com/price/) It is the CoinDesk BPI. The difficulty data going back to January 3, 2009 can also be downloaded also from CoinDesk http://www.coindesk.com/data/bitcoin-mining-difficulty-time/ (http://www.coindesk.com/data/bitcoin-mining-difficulty-time/). In both cases click on Export in the chart and select CSV Chart Data. There was some Bitcoin trading before July 18, 2010 on New Liberty Standard and Bitcoin Market both of which have gone defunct. The New Liberty Standard site is still up with the 2009 data. http://newlibertystandard.wikifoundry.com/page/2009+Exchange+Rate (http://newlibertystandard.wikifoundry.com/page/2009+Exchange+Rate) however the early 2010 data is now gone. I did manage to download the early 2010 data in November 2013 before it was lost. Send me a PM and I can send the file. LibreOffice (.ods) format only to preserve the download timestamp.

One thing to keep in mind is that difficulty measures price in terms of computing resources. As for the time period, for Bitcoin the best is 2010 - 2012. Still, there is an evolution here from CPU to GPU and then some FPGA before ASIC mining. One other option that would be clean is actually Monero over the last 14 months, since in the Monero case the technology has not changed.

It can't be assumed that the technology will never change though. That's not as big a deal for transaction fees where, as I explained earlier, it is fundamentally a guideline, so the worst case is you end up with a poor guideline but it doesn't fail altogether. It could be a bigger problem if using this method for something else.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on August 25, 2015, 12:30:35 AM
...
It can't be assumed that the technology will never change though. That's not as big a deal for transaction fees where, as I explained earlier, it is fundamentally a guideline, so the worst case is you end up with a poor guideline but it doesn't fail altogether. It could be a bigger problem if using this method for something else.

Actually that is not the assumption here. The assumption is that if the price of technology falls by say 1000x then the spammer would need 1000x as much spam to do the same damage.

Edit: We are talking about a per KB fee here after all.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on August 25, 2015, 12:45:48 AM
...
It can't be assumed that the technology will never change though. That's not as big a deal for transaction fees where, as I explained earlier, it is fundamentally a guideline, so the worst case is you end up with a poor guideline but it doesn't fail altogether. It could be a bigger problem if using this method for something else.

Actually that is not the assumption here. The assumption is that if the price of technology falls by say 1000x then the spammer would need 1000x as much spam to do the same damage.

Edit: We are talking about a per KB fee here after all.

By technology never changing I was referring to a regime change in difficulty as happened with Bitcoin ASICs. The difficulty increased by a factor of 1000 (or whatever the number) without a corresponding increase in broader technology that would relate to the cost of transaction processing.



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on September 01, 2015, 08:44:42 PM
Okay, so one month of this thread being up. A short recap -

1. Using difficulty as a proxy for price as a way to automatically adjust minimum xmr / kb transaction fees. Potentially has legs, but the exact formula is yet to be determined.

2. Proof of work discussion re: ASIC resistance. A lot of it went over my head, but it's the main thing being discussed on page 2. Cuckoo Cycle seems fascinating; it uses some sort of graph theory.

Going forward, I'm going to try and extract a model from the existing litecoin and bitcoin data, as they are two systems that have seen the full evolution of adoption, price discovery, and mining hardware. Whether or not these functions / models will be useful is unknown.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on September 03, 2015, 11:43:04 AM
I've been thinking about how Monero could pull off some form of light node / mobile wallet / electrum-style thing.

On-network transaction creation:

Basically, create a way in which you can create a transaction without having the blockchain downloaded. This is interesting in monero, of course, because you need the entire blockchain to craft a transaction, in order for the wallet to pick some outputs to mix with.

What if instead you could broadcast the skeleton of a transaction to the network, and other nodes could fill in the outputs necessary to complete your mixin? These nodes then send the transaction back to the node transacting.

So, I want to make a transaction for 22.2 XMR, with a mixin of 3. The wallet creates a network request essentially asking for other outputs of 10, 2 and 0.2. Perhaps this "file" sits in other mempools. When in the mempool of a remote daemon, the daemon recognizes it needs outputs, and puts in some candidates. When the transaction is filled in, the daemon relays it back to the network. Because different daemons could create the transaction differently, you'd have different candidate sets. The requesting daemon then randomly picks one of these, the actual outputs are entered, and then the full transaction is sent to the network.

Of course a problem might be a loss of privacy, because a snooper could skim the mempool for incomplete transactions, and then compare these incomplete transactions with finalized transactions on the blockchain. Although the stealth addresses might take care of this, because a set of outputs in a candidate set from the mempool would look completely different than the final set in the transaction recorded on the blockchain.
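A rough sketch of what such a skeleton might look like (entirely hypothetical: `TxSkeleton`, its fields, and the fill-in flow are invented for illustration and do not exist in the Monero protocol):

```python
from dataclasses import dataclass, field

@dataclass
class TxSkeleton:
    """Hypothetical 'fill in my mixins' request that sits in mempools."""
    denominations: list          # e.g. [10, 2, 0.2] as in the example above
    mixin: int                   # decoys requested per denomination
    candidate_sets: list = field(default_factory=list)

    def add_candidates(self, decoy_lists):
        # A remote daemon proposes one list of `mixin` decoy outputs per
        # denomination; the sender later picks one full set at random.
        if len(decoy_lists) == len(self.denominations) and \
           all(len(d) == self.mixin for d in decoy_lists):
            self.candidate_sets.append(decoy_lists)

skel = TxSkeleton(denominations=[10, 2, 0.2], mixin=3)
skel.add_candidates([["a1", "a2", "a3"], ["b1", "b2", "b3"], ["c1", "c2", "c3"]])
print(len(skel.candidate_sets))  # 1
```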

Microchain downloading
Alternatively, these types of clients could download portions of the blockchain on demand from multiple peers and only maintain a microblockchain containing blocks with their outputs. So, when a wallet goes to create a transaction, it instructs the daemon to just start downloading random chunks from the network. When enough chunks have been obtained to craft the transaction, the transaction is created and sent to the network. The other nodes participating as peers in this sub-network could have chunk candidates ready for transmission - i.e., the software scans the data.mdb file for useful chunks - essentially, separating the wheat (blocks with actual transactions to mix with) from the chaff (the rest of the blockchain, filled mostly with coinbase payments at this point). This separation would become less useful over time, but still some block chunks are probably more useful than others.

Users of this client could then occasionally "freshen up" their privacy set - clear their microchain and obtain a new chain to craft transactions from. 

Again, this has the possibility of affecting privacy, in that the set of transactions your mixin is coming from would be observable on the network, if the stealth addressing doesn't mask this. Also, on your device, if someone obtained your microchain, they could potentially deduce which transactions are yours on the main chain.

And ultimately these would require implementation of MRL4 for ultimate untraceability, but that's the case with anything.

Well, that was one way to spend my morning coffee!


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on September 03, 2015, 11:56:24 AM
because you need the entire blockchain to craft a transaction, in order for the wallet to pick some outputs to mix with.

Nah just a sampling of it. You can even grab a bunch of outputs of every size ahead of time (say when a phone is on WiFi) store and use when needed.
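A toy version of that prefetch-and-cache idea (`DecoyCache` and its methods are invented names for illustration, not a real wallet API):

```python
import random

class DecoyCache:
    """Hypothetical client-side cache of potential mixin outputs,
    prefetched per denomination (say, while a phone is on WiFi)."""

    def __init__(self):
        self.by_denomination = {}

    def prefetch(self, denomination, outputs):
        self.by_denomination.setdefault(denomination, []).extend(outputs)

    def pick_mixins(self, denomination, mixin):
        pool = self.by_denomination.get(denomination, [])
        if len(pool) < mixin:
            raise ValueError("not enough cached decoys; prefetch more")
        return random.sample(pool, mixin)

cache = DecoyCache()
cache.prefetch(10.0, ["out_a", "out_b", "out_c", "out_d"])
print(cache.pick_mixins(10.0, 3))  # three randomly chosen cached outputs
```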


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on September 03, 2015, 12:31:04 PM
because you need the entire blockchain to craft a transaction, in order for the wallet to pick some outputs to mix with.

Nah just a sampling of it. You can even grab a bunch of outputs of every size ahead of time (say when a phone is on WiFi) store and use when needed.

right! exactly! that's what I guess the microchain downloading would do.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: MoneroMooo on September 03, 2015, 08:37:54 PM
If your ISP has spied on your connection (and let's face it, they pretty much all do, whether for themselves or on behalf of people they can't easily say no to), and thus knows which blocks you have received, any output used as input to a ring signature can be ruled out as yours if it was created in a block you did not download. Till we get I2P, anyway.



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on September 03, 2015, 09:04:57 PM
If your ISP has spied on your connection (and let's face it, they pretty much all do, whether for themselves or on behalf of people they can't easily say no to), and thus knows which blocks you have received, any output used as input to a ring signature can be ruled out as yours if it was created in a block you did not download. Till we get I2P, anyway.

That's assuming you are only using one ISP. For mobile devices that will only be true if you never switch between cellular and WiFi, and with WiFi (or wired) only if you never switch locations. Of course they may all be spying and they may share data but they also may not. It seems there could be a lot of gaps.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on September 03, 2015, 09:26:19 PM
Yeah I imagined that i2p would be integral to any non-full-blockchain transaction creation for the best privacy.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: owm123 on September 04, 2015, 12:37:25 AM
Yeah I imagined that i2p would be integral to any non-full-blockchain transaction creation for the best privacy.

Why i2p not tor? From what I understand, with i2p you can only access others who are also in the i2p network. So would there be two monero blockchains? One within i2p darknet, and second in "regular" internet?


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on September 04, 2015, 01:22:12 AM
Yeah I imagined that i2p would be integral to any non-full-blockchain transaction creation for the best privacy.

Why i2p not tor? From what I understand, with i2p you can only access others  who are also in i2p network. So would there be two monero blockchains? One within i2p darknet, and second in "regular" internet?

bah BAM!

https://www.reddit.com/r/Monero/comments/2ti53m/why_is_monero_aiming_to_integrate_i2p/

Quote
Tor is optimised for low-bandwidth clients and high-bandwidth exit nodes, whereas i2p is optimised for internal hidden services. Thus, i2p is significantly faster when routing internal traffic.

i2p's floodfill routers (roughly analogous to Tor's directory servers) aren't hardcoded

i2p is a packet-switched network (as opposed to circuit-switched), which makes it more robust

no client-only peers; all peers route traffic and assist in building and running short-lived tunnels

TCP and UDP are supported, which means that things like OpenAlias can still work over i2p


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on September 04, 2015, 01:40:31 AM
Yeah I imagined that i2p would be integral to any non-full-blockchain transaction creation for the best privacy.

Why i2p not tor? From what I understand, with i2p you can only access others  who are also in i2p network. So would there be two monero blockchains? One within i2p darknet, and second in "regular" internet?

No there won't be two blockchains. Everything will be relayed bidirectionally between the open and hidden networks.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: owm123 on September 04, 2015, 11:30:43 PM
Yeah I imagined that i2p would be integral to any non-full-blockchain transaction creation for the best privacy.

Why i2p not tor? From what I understand, with i2p you can only access others  who are also in i2p network. So would there be two monero blockchains? One within i2p darknet, and second in "regular" internet?

No there won't be two blockchains. Everything will be relayed bidirectionally between the open and hidden networks.

Any idea when i2p support will be implemented into Monero?


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on September 13, 2015, 05:04:00 PM
For future reference, a lot of interesting work regarding pool disincentivization

https://cs.umd.edu/~amiller/nonoutsourceable.pdf

https://bitcointalk.org/index.php?topic=309073.0

which is (obviously) an old problem in Bitcoin, but I had no idea they were digging into this 2 years ago. And the thread seems to have been resurrected.

Unfortunately, I don't see pooling discouragement anywhere in the Monero research and development goals. There's smart mining, but that suffers from a chicken-and-egg problem IMO. If smart mining happens before / during a surge in adoption, then great, everyone's solo mining. If pools maintain their prominence in the network hashrate, people may not adopt Monero due to the centralization. I.e., if people begin to appreciate the centralization of Bitcoin as a flaw, they will look for a truly decentralized cryptocurrency. If Monero, at that point in time, still has pools and an insurmountable network architecture (i.e., pools will not accept a fork that inhibits pooling), people will look elsewhere, or at worst, view cryptocurrencies as a failed experiment. The value proposition of all cryptocurrencies is decentralization. That nothing is done to secure this fundamental property of these networks is mind-boggling to me.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on September 14, 2015, 02:48:55 PM
Crazy idea I haven't fully thought through, but need to get it out of my head so I can get back to work.

Instead of the network being broken up, functionally, into miners, nodes, and transaction makers (which is the way things have devolved), would it be possible to combine making a transaction with mining? So essentially you can't mine without making a transaction, and you can't make a transaction without mining.

So, when the wallet crafts a transaction, it also crafts a proof of work. It takes everything in the mempool, creates a block candidate with the new transaction, and then hashes the block with a nonce or whatever to find a POW. In this design, the mempool is filled with failed block candidates. We'd have to find a way to commit each mining / transaction attempt, though, such that one couldn't simply create a million transactions and pick the one that works. There are also some obvious flaws in terms of bootstrapping. I.e., you can't mine without transacting, and you can't transact without coins.

edited to add: Duh. Stupid. Easy exploit - sybil-type attacks, setting up other wallets / accounts that you send transactions to in order to "mine". Though transaction fees could eat away at this.

edited to add:

So the software creates a transaction, pulls some failed block candidates from the mempool, extracts the novel transactions from those failed block candidates (removes duplicates), and then hashes the whole thing to provide a POW.

hrm, perhaps make something called a failchain, which is a mini blockchain containing all failed block candidates. The failchain lives in RAM, similar to the mempool. Once a transaction/block solution is found, that top block is added to the primary blockchain, and the failchain is destroyed. I think, perhaps, this failchain would prevent one from spamming the POW-space with extra work, because duplicate transactions would be removed by the de-duplication mechanism. Thus, if you submitted multiple transaction-block-solutions, the failchain would reject them if they contained the same transaction as the POW-transaction, and if they were all unique, then when the block is finally found, all of those transactions would be processed (with their associated fees).

How the failchain would be enforced, though... hrm.

what's interesting, in general, is that this adds an in-network cost to mining.
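The merge / de-dup step sketched above might look like this toy code (all names and structures are mine; SHA-256 just stands in for the real PoW hash):

```python
import hashlib
import json

def merge_candidates(new_tx, failed_candidates):
    # De-duplication: keep each transaction only once across all
    # failed block candidates pulled from the mempool / failchain.
    seen = set()
    merged = []
    for cand in failed_candidates:
        for tx in cand["txs"]:
            if tx not in seen:
                seen.add(tx)
                merged.append(tx)
    # The wallet's own new transaction always rides along.
    if new_tx not in seen:
        merged.append(new_tx)
    return merged

def pow_attempt(prev_hash, txs, nonce):
    # Hash the merged candidate as one PoW attempt.
    blob = json.dumps({"prev": prev_hash, "txs": txs, "nonce": nonce})
    return hashlib.sha256(blob.encode()).hexdigest()

# Two failed candidates share tx2; the merge keeps it once.
failed = [{"txs": ["tx1", "tx2"]}, {"txs": ["tx2", "tx3"]}]
merged = merge_candidates("tx4", failed)
digest = pow_attempt("prevhash", merged, nonce=0)
```

If multiple transaction-block-solutions were submitted with the same embedded transaction, this merge would collapse them to one, which is the spam resistance suggested above.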


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on September 14, 2015, 11:13:33 PM
Crazy idea I haven't fully thought through, but need to get it out of my head so I can get back to work.

Instead of the network being broken up, functionally, into miners, nodes, and transaction makers (which is the way things have devolved), would it be possible to combine making a transaction with mining? So essentially you can't mine without making a transaction, and you can't make a transaction without mining.

Yes there are various proposals like that

https://bitcointalk.org/index.php?topic=1177633.0

https://www.reddit.com/r/Bitcoin/comments/2m8sh9/am_i_missing_something_blockchain_without_bitcoin/cm272vv

I proposed something similar but a bit different in MRL discussions, but it isn't fully designed into something workable (nor are the above proposals, but maybe they are closer)

Probably others too


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: tromp on September 16, 2015, 09:32:14 PM
The next attempt at an improvement over Scrypt (which I wrote down in 2013, well before the CryptoNight proof-of-work hash was published) was to not just read pseudo-randomly as described above, but to write new pseudo-random values to each pseudo-randomly accessed location. This defeats storing only every Nth random access location. However, it is still possible to trade latency for computation by only storing the location that was accessed to compute the value to store, instead of storing the entire value. The compute-bound version of the algorithm cannot be defeated by reading numerous locations before storing a value, because the starting seed need only be stored.

I fail to make sense of the last two sentences. Could you elaborate?

Quote
Thus I concluded it is always possible to trade computation for latency in any memory-hard hash function.

Why limit yourself to hash functions? See

http://cryptorials.io/beyond-hashcash-proof-work-theres-mining-hashing/


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: tromp on September 16, 2015, 09:39:04 PM
An algorithm with a much bigger memory footprint, like cuckoo, without a corresponding increase in verification time, might be sized for L4 though.

This focus on cache sizes may be unwarranted.

If you benchmark Cuckoo Cycle for increasing memory sizes, you only see a small slowdown in access latency (~40%) when moving from fitting in the 12MB/core on-chip cache to moving way beyond that.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on September 16, 2015, 10:23:11 PM
An algorithm with a much bigger memory footprint, like cuckoo, without a corresponding increase in verification time, might be sized for L4 though.

This focus on cache sizes may be unwarranted.

If you benchmark Cuckoo Cycle for increasing memory sizes, you only see a small slowdown in access latency (~40%) when moving from fitting in the 12MB/core on-chip cache to moving way beyond that.

Yup. I think I was the one who pointed that out to you.

My point in mentioning cache is that cache benefits from physical proximity. You can't put two things in the same space, so if you want to introduce more parallelism (horizontal or vertical) in the processing elements, you will have to move the memory farther away, incurring a cost, certainly in latency and probably in power usage too. Okay, the cost may not be that large, but it does exist, and it becomes an obstacle to overcome before even breaking even.

Obviously things like more use of 3D in microelectronics will shift the numbers around but the principle of finite proximal space will remain.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on October 20, 2015, 04:39:26 PM
Just an idea that burbled up in my mind. Posting for 2 reasons - if it exists, someone will come in and go "someone already thought of this". Or if it doesn't exist, someone will go "it won't work because X".

Integrate an Audio signal into the POW function (somehow)

Rationale: all personal computing devices have audio hardware. There might be a way to use the audio signal to create a fingerprint of the individual computer that will prohibit pooling.

I imagine it as such, though this could be totally off due to an incomplete understanding of POW / pooling protocols.

The mining algorithm starts to attempt to find a solution for a block. Upon the start of this function, an audio recording is initiated (recording a wav file). No microphone would be needed - no audio board will read true silence in the circuitry, so the wav recording will contain electronic noise. Each time a block solution is attempted, a hash of the wav file is included in the hash of the POW. So basically, the duration of the wav file will equal the duration of the current attempt to find a block solution. (that matters somehow... I can't figure out why)
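A rough sketch of that mechanism, with os.urandom standing in for the WAV capture of audio-circuit noise (everything here, including the toy difficulty check, is hypothetical):

```python
import hashlib
import os

def record_noise(num_bytes=1024):
    # Stand-in for recording electronic noise off the audio circuitry;
    # os.urandom simulates the unpredictable samples.
    return os.urandom(num_bytes)

def pow_with_noise(block_blob, nonce, noise):
    # Fold a fingerprint of the recording into the hash input, tying
    # each attempt to the noise captured during that attempt.
    noise_fp = hashlib.sha256(noise).digest()
    h = hashlib.sha256(block_blob + nonce.to_bytes(8, "little") + noise_fp)
    return h.hexdigest()

def mine(block_blob, difficulty_prefix="00", max_tries=100000):
    for nonce in range(max_tries):
        noise = record_noise()  # a fresh recording per attempt
        digest = pow_with_noise(block_blob, nonce, noise)
        if digest.startswith(difficulty_prefix):
            return nonce, digest
    return None, None

nonce, digest = mine(b"example block blob")
```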

Maybe? Perhaps?



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: MoneroMooo on October 20, 2015, 09:48:30 PM
As a general rule of thumb, a party cannot rely on another party to cooperate in doing something detrimental to the second party.

If the network wants to detect whether blocks A and B are found by Alice, Alice will make sure to generate two different fingerprints.

Besides, you wouldn't want to embed a fingerprint of the miner in a block for a currency that prides itself on being unlinkable.

Thermal noise is used as a random source, so I'm not even sure you could get any kind of fingerprint by reading off an audio device anyway. Not that you can always read off the hardware: my VMs certainly don't have a HW audio device.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on December 02, 2015, 06:32:09 PM
moneromoo, can you add this functionality, or does it already exist?

I noticed here that this dude wants to check his paper wallet balance with a viewkey

https://www.reddit.com/r/Monero/comments/3v631e/how_do_i_use_my_view_key_to_view_my_balance/

so obviously he can do that.

But what if they want to check the balance in an offline computer (for increased security or whatever)

Can simplewallet access the blockchain without the daemon being synced? I've noticed that simplewallet gets mad when the daemon isn't synced. Surely it can just access the blockchain DB.
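For what it's worth, offline view-key scanning is structurally something like the sketch below: walk the local blockchain DB and total the outputs the view key recognizes. The derivation here is a plain-hash mock of the real CryptoNote ed25519 math (P = H_s(aR, i)G + B), and none of these names come from simplewallet:

```python
import hashlib

def derive_output_key(view_key, tx_pubkey, index):
    # Mock stand-in for the CryptoNote derivation H_s(aR, i)G + B;
    # real detection needs ed25519 scalar arithmetic.
    data = view_key + tx_pubkey + index.to_bytes(4, "little")
    return hashlib.sha256(data).hexdigest()

def scan_balance(view_key, blocks):
    # No daemon and no network needed: just iterate the stored chain.
    balance = 0
    for block in blocks:
        for tx in block["txs"]:
            for i, out in enumerate(tx["outputs"]):
                if out["key"] == derive_output_key(view_key, tx["tx_pubkey"], i):
                    balance += out["amount"]
    return balance

# One output belongs to our view key, one does not.
vk = b"our-view-key"
txp = b"tx-pubkey-R"
ours = derive_output_key(vk, txp, 0)
chain = [{"txs": [{"tx_pubkey": txp,
                   "outputs": [{"key": ours, "amount": 5},
                               {"key": "not-ours", "amount": 7}]}]}]
balance = scan_balance(vk, chain)
```

One real-world caveat: a view key alone detects incoming outputs but cannot tell which of them were later spent, so such a scan over-reports the spendable balance.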


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on December 02, 2015, 06:50:23 PM
moneromoo, can you add this functionality, or does it already exist?

I noticed here that this dude wants to check his paper wallet balance with a viewkey

https://www.reddit.com/r/Monero/comments/3v631e/how_do_i_use_my_view_key_to_view_my_balance/

so obviously he can do that.

But what if they want to check the balance in an offline computer (for increased security or whatever)

Can simplewallet access the blockchain without the daemon being synced? I've noticed that simplewallet gets mad when the daemon isn't synced. Surely it can just access the blockchain DB.

Seems like the wallet should still work even if the computer is offline, but maybe with a warning if the highest block is "too old".



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on January 20, 2016, 06:06:11 PM
So I was reading this
http://www.scribd.com/doc/273443462/A-Transaction-Fee-Market-Exists-Without-a-Block-Size-Limit#scribd

and my thoughts started to drift when I encountered the concept that orphanization is one of the impediments to picking what to mine, tied into the whole block size / fee market debate, etc...

Is there any work in this space regarding what could be called sister blocks, or fusion blocks?

Basically, the way I understand it (and granted, my assumptions could be flawed) is that there exists a set of transactions in the mempool. We'll just use 5 here

Trans1
Trans2
Trans3
Trans4
Trans5

If miner A decides to put 1,2,3 in his block (block A), and miner B decides to put 3,4,5 in his block (block B), they are both technically valid blocks (they both have the previous block's hash and contain valid transactions from the mempool). However, due to the nature of satoshi consensus, if block A makes it into the chain first, block B becomes an orphan - even though it is entirely valid.

It's even easier to understand the inefficiency of satoshi consensus if block A has 1,2,3 and block B has 4,5. In this case, there's really no reason both blocks aren't valid.

I see now, as I continue to think about this, that the problem lies in the transaction fees: if a transaction exists in two blocks, which block finder gets the fee? But this isn't an intractable problem.

Essentially what I'm thinking is that you can imagine these two blocks existing as blebs on the chain.
                         .
._._._._._._._._._./
                        \,

each dot is a block, and the comma indicates a sister block
in current protocol, this would happen
                         ._._._
._._._._._._._._._./
                        \,_,

And eventually one chain would grow longer (which is ultimately influenced by bandwidth) and the entire sister chain would be dropped, and if your node was on that chain you'd experience a reorg (right?).

why couldn't something be implemented where the above fork turns into a bleb

                         .
._._._._._._._._._./\.
                        \,/

which is eventually resolved to a fusion block

._._._._._._._._._._!_._


where the ! indicates a fusion block. When encountering a potential orphan scenario (the daemon receives two blocks in close proximity, or has already added a block but then receives a similar block for the same block height), instead of the daemon rejecting one as an orphan, it scans the sister block as a candidate for fusion. There would be some parameters (at most X% of transactions overlap; only blocks at the same height are candidates (this is effectively the time window)). As part of this, the system would somehow need to be able to send transaction fees to different block finders, but again this seems tractable (though I await being schooled as to why it's not possible). In addition, the block reward itself would need to be apportioned.

Or is this what a reorg does? The way I understand reorgs, this is different than a reorg.

Though upon creation of the fusion block a reorganization would have to occur. So at the cost of overall bandwidth we provide a countermeasure for the loss of economic incentive for large blocks.

And one problem to address is that you would need a new block header for the fusion block, but this could really just be the hash of the two sister blocks. Both sisters are valid, therefore the hash of those valid blocks is valid.
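A minimal sketch of the fusion check and merge described above; the 50% overlap threshold and the even split of the block reward are arbitrary placeholders for the X% parameter and the fee/reward apportionment left open here:

```python
def can_fuse(block_a, block_b, max_overlap=0.5):
    # Fusion criteria: same height (the time window), same parent,
    # and transaction overlap at or below the X% threshold.
    if block_a["height"] != block_b["height"]:
        return False
    if block_a["prev_hash"] != block_b["prev_hash"]:
        return False
    txs_a, txs_b = set(block_a["txs"]), set(block_b["txs"])
    overlap = len(txs_a & txs_b) / min(len(txs_a), len(txs_b))
    return overlap <= max_overlap

def fuse(block_a, block_b, reward):
    # The fusion block's contents are the union of the sisters'
    # transactions; here the reward is simply split evenly between
    # the two block finders (one of several possible rules).
    txs = sorted(set(block_a["txs"]) | set(block_b["txs"]))
    return {
        "height": block_a["height"],
        "prev_hash": block_a["prev_hash"],
        "txs": txs,
        "reward_a": reward / 2,
        "reward_b": reward / 2,
    }

# Block A has 1,2,3 and block B has 4,5: no overlap, both valid.
a = {"height": 10, "prev_hash": "p", "txs": ["tx1", "tx2", "tx3"]}
b = {"height": 10, "prev_hash": "p", "txs": ["tx4", "tx5"]}
fused = fuse(a, b, reward=12)
```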

Ok back to work.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 20, 2016, 08:16:22 PM
...

No closed source. The key would be produced publicly at a ceremony.
...

Using what operating system and firmware?

Of course they will need to convince the public the master key is sound. Or use my idea above of having multiple mixers and timing them out. I believe there is a solution, yet I will agree the current organization of their plans seems legally and structurally flawed.

That is why I say we can transition and beat them. But the technology is real anonymity. If you want real anonymity, you have to find a way to use their technology. Period. (and I have been studying this for a long time)

This does not answer my question which is cut and dry and goes to the heart of the trust issue.

If you apply that line of thinking, then every anonymity scheme is insecure, because operating systems and computers are never 100% secure.

I already proposed how to spread the risk out and make it non-systemic.

Note that Monero (Cryptonote one-time rings and every other kind of anonymity technology) also has systemic risk due to combinatorial analysis cascade as more and more users are unmasked with meta-data and overlapping mixes.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on January 20, 2016, 08:51:09 PM
...

No closed source. The key would be produced publicly at a ceremony.
...

Using what operating system and firmware?

Of course they will need to convince the public the master key is sound. Or use my idea above of having multiple mixers and timing them out. I believe there is a solution, yet I will agree the current organization of their plans seems legally and structurally flawed.

That is why I say we can transition and beat them. But the technology is real anonymity. If you want real anonymity, you have to find a way to use their technology. Period. (and I have been studying this for a long time)

This does not answer my question which is cut and dry and goes to the heart of the trust issue.

If you apply that line of thinking, then every anonymity is insecure because operating systems and computers are never 100% secure.

I already proposed how to spread the risk out and make it non-systemic.

Note that Monero (Cryptonote one-time rings and every other kind of anonymity technology) also has systemic risk due to combinatorial analysis cascade as more and more users are unmasked with meta-data and overlapping mixes.

Proprietary software solutions have by their very nature a centralized systemic risk that Free Libre Open Source software solutions do not. The types of risks you describe in Monero are trivial compared to the risk of the DRM in the operating system used to generate the master key in a centralized proprietary solution such as the one you propose. Furthermore, I still do not have an answer to what is a straightforward yes or no question.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: vokain on January 20, 2016, 08:59:37 PM
...

No closed source. The key would be produced publicly at a ceremony.
...

Using what operating system and firmware?

Of course they will need to convince the public the master key is sound. Or use my idea above of having multiple mixers and timing them out. I believe there is a solution, yet I will agree the current organization of their plans seems legally and structurally flawed.

That is why I say we can transition and beat them. But the technology is real anonymity. If you want real anonymity, you have to find a way to use their technology. Period. (and I have been studying this for a long time)

This does not answer my question which is cut and dry and goes to the heart of the trust issue.

If you apply that line of thinking, then every anonymity is insecure because operating systems and computers are never 100% secure.

I already proposed how to spread the risk out and make it non-systemic.

Note that Monero (Cryptonote one-time rings and every other kind of anonymity technology) also has systemic risk due to combinatorial analysis cascade as more and more users are unmasked with meta-data and overlapping mixes.

Proprietary software solutions have by their very nature a centralized systemic risk that Free Libre Open Source software solutions do not. The types of risks you describe in Monero are trivial compared to the risk of the DRM in the operating system used to generate the master key in a centralized proprietary solution such as the one you propose. Furthermore, I still do not have an answer to what is a straightforward yes or no question.

I am imagining that the type of people designing such a technology would do better than generate a masterkey on Windows et al. I'm actually imagining purpose-built, (hopefully) auditable software and maybe even hardware.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on January 20, 2016, 09:04:58 PM
...
I am imagining that the type of people designing such a technology would do better than generate a masterkey on Windows et al. I'm actually imagining purpose-built, auditable software and maybe even hardware.

Auditable by whom?

It comes down to Free Software vs Proprietary software. The same is true for the hardware. There is a reason why my question is being avoided here.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: vokain on January 20, 2016, 09:06:11 PM
...
I am imagining that the type of people designing such a technology would do better than generate a masterkey on Windows et al. I'm actually imagining purpose-built, auditable software and maybe even hardware.

Auditable by whom?

It comes down to Free Software vs Proprietary software. The same is true for the hardware. There is a reason why my question is being avoided here.

By the attendees of said masterkey-generation ceremony.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on January 20, 2016, 09:10:50 PM
...
I am imagining that the type of people designing such a technology would do better than generate a masterkey on Windows et al. I'm actually imagining purpose-built, auditable software and maybe even hardware.

Auditable by whom?

It comes down to Free Software vs Proprietary software. The same is true for the hardware. There is a reason why my question is being avoided here.

By the attendees of said masterkey-generation ceremony.

Actually by anyone who uses the currency. The role of the attendees is to verify that all the software has not changed between what was used and what is released to the public.

Edit: The minute one tries to protect "intellectual property" at any level the trust is gone.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 20, 2016, 09:11:02 PM
...

No closed source. The key would be produced publicly at a ceremony.
...

Using what operating system and firmware?

Of course they will need to convince the public the master key is sound. Or use my idea above of having multiple mixers and timing them out. I believe there is a solution, yet I will agree the current organization of their plans seems legally and structurally flawed.

That is why I say we can transition and beat them. But the technology is real anonymity. If you want real anonymity, you have to find a way to use their technology. Period. (and I have been studying this for a long time)

This does not answer my question which is cut and dry and goes to the heart of the trust issue.

If you apply that line of thinking, then every anonymity is insecure because operating systems and computers are never 100% secure.

I already proposed how to spread the risk out and make it non-systemic.

Note that Monero (Cryptonote one-time rings and every other kind of anonymity technology) also has systemic risk due to combinatorial analysis cascade as more and more users are unmasked with meta-data and overlapping mixes.

Proprietary software solutions have by their very nature a centralized systemic risk that Free Libre Open Source software solutions do not. The types of risks you describe in Monero are trivial compared to the risk of the DRM in the operating system used to generate the master key in a centralized proprietary solution such as the one you propose. Furthermore, I still do not have an answer to what is a straightforward yes or no question.

The masterkey is generated once and only the public key is retained. As long as no one saw, nor can recover, the private key before it was discarded, then there is nothing proprietary remaining in the use of the Zerocash open source code. The Zerocash open source code requires a public key to be pasted in. It is the public (ceremony) generation of that key that determines whether anyone had access to the private key when the public key was created.

DRM has nothing to do with it at all. Thus I assume you don't understand the issue.

The only issue is whether the public key can be computed at a public ceremony and the private key securely discarded. So for example, they could use any computer, encase it in lead before running the computation, and allow no external connection to the computer other than the screen, which reads out the public key.

Then slide the computer into a barrel of acid so that it is permanently destroyed. All done at a public ceremony so there can be no cheating.

Of course one could envision elaborate/exotic means of cheating, such as using radio waves to communicate the private key out to an external actor, but again that is why I wrote "encase it in lead". There is the issue of how to destroy it without momentarily removing it from its communication barrier. But I am confident these physics issues can be worked out to a sufficient level of trust.

As for trust, not even the Elliptic Curve Cryptography and other math we use for crypto can be 100% trusted. So if you start arguing silly about 100% trust, then it is safe to ignore as loony.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on January 20, 2016, 09:27:54 PM
Damn. Saw this thread bumped and got excited it was in response to my fusion block idea. Instead it's this zero-knowledge vaporcoin stuff. You better bring it all back in somehow to MONERO improvement technical discussion lest I wield my moderation powers and shrinkify everything.

Is Monero implementing ZKP? Last I heard that's a big negative.

This one:

Quote
Note that Monero (Cryptonote one-time rings and every other kind of anonymity technology) also has systemic risk due to combinatorial analysis cascade as more and more users are unmasked with meta-data and overlapping mixes.

might have some legs. As you mentioned, I think the meta-data (what can be referred to as out-of-band) can't really be addressed by any protocol. No computer code can stop you from posting on Facebook the exact time that you purchased a drone on Amazon. I think the general idea, though, is that with Monero (and others) any analysis faces a much steeper effort wall than with Bitcoin.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 20, 2016, 09:30:25 PM
...
FUD. The ceremony is only to compute a public key, nothing else. No other software has to be audited. Only need to confirm that the private key was not communicated from the computer to anyone. Period.

How do you know that the public key you see on the screen is the one that was computed, and not one that was pre-computed before the computer was "placed in lead"?

Edit: DRM in the OS has everything to do with this since it is the perfect place to hide the private key. That is what DRM is designed to do: hide private keys.

The hardware has to be audited. But we also have to audit our hardware that we use to run Cryptonote. If Intel is planting spies in the hardware, then we are screwed.

100% trust is impossible. And this is another reason I deprioritized anonymity. It is a clusterfuck.

Also I think perhaps Zerocash was working on a way to generate the public key decentralized, but I haven't kept up with progress on that.

Indeed Zerocash could end up being a Trojan Horse (a way to get fiat in the back door) and that is why I made my proposal to use them only as ephemeral mixes that die periodically, so then we will know if the key was compromised or not.

The result of my proposal is:

  • Stolen coins aren't systemic to the overall coin (just as losing some coins to Mt. Gox and Cryptsy isn't), and at least participants get ongoing ceremonies to get better and better at auditing the hardware.
  • No anonymity is ever lost.
  • No NET coin supply is ever created out-of-thin-air (instead some people lose coins if they chose an insecure mixer that had a compromised key), unlike both Zerocash and RingCT, where coin supply could be created out of thin air and we would never know it, due to a bug in the cryptography.

That will kick ass on Monero, because if I pass through the mixer, I know my anonymity is provable and I know I didn't lose my coins. It is only people who are still sitting inside the mixer who risk losing coins. Everything has a risk. I would much rather take the microscopic risk of a compromised key (causing me to lose some coins) than the sure risk of meta-data correlation in Monero, which can send me to jail! Surely I would be judicious about not mixing all my coins at the same time and not all in the same mixer.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GreekBitcoin on January 20, 2016, 09:34:45 PM
Quote
But I am confident these physics issues can be worked out to a sufficient level of trust.

Only need to confirm that the private key was not communicated from the computer to any one.

I find this kinda weak against your general absolutism. "So Simple Yet So Complex".


After all, what stops all the 3-letter agencies, who can own blockchains and can do analysis and attacks etc., from staging the whole thing? Will I be allowed to check that computer?

I mean, I have near-zero understanding of cryptography, but your search for the perfect/ideal solution looks like it is making you ready to take a huge and dangerous bet.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 20, 2016, 09:40:08 PM
Quote
But I am confident these physics issues can be worked out to a sufficient level of trust.

Only need to confirm that the private key was not communicated from the computer to any one.

I find this kinda weak against your general absolutism. "So Simple Yet So Complex".


After all, what stops all the 3-letter agencies, who can own blockchains and can do analysis and attacks etc., from staging the whole thing? Will I be allowed to check that computer?

I mean, I have near-zero understanding of cryptography, but your search for the perfect/ideal solution looks like it is making you ready to take a huge and dangerous bet.

I proposed ephemeral mixers based on Zerocash technology. They will be ferreted out if they are doing this, because it will be known that the key was compromised when the mixer expires and everyone has to cash out of the mixer back into the public coin. The bastards can't keep doing it over and over again. The participants will get wise as to the methods the attackers are using.

I am not absolutist. Rather I think correctly and realistically when I weigh marketing, tradeoffs, and delusion as follows:

That will kick ass on Monero, because if I pass through the mixer, I know my anonymity is provable and I know I didn't lose my coins. It is only people who are still sitting inside the mixer who risk losing coins. Everything has a risk. I would much rather take the microscopic risk of a compromised key (causing me to lose some coins) than the sure risk of meta-data correlation in Monero, which can send me to jail! Surely I would be judicious about not mixing all my coins at the same time and not all in the same mixer.

Marketing and design are holistically joined at the hip. Those fools who said the marketing can come later are clueless.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 20, 2016, 10:08:43 PM
One more point I considered in my holistic analysis is that for most transactions we can't be anonymous. Thus anonymity is more suited to those who want to receive some payment anonymously and hide the funds there, extracting them only into public funds in small morsels or spending them in other, rare anonymous transactions (e.g. buying some gold bars from someone you trust not to reveal your identity).

In that case one might think you can just use stealth addresses (unlinkability) and run a full node to confirm receipt of funds anonymously. No need for Cryptonote, RingCT, or Zerocash. But the problem is the payer can be identified and pressured to reveal your identity.

So this is why we need Zerocash to make the untraceability impervious to meta-data correlation.

But the problem with my proposal for ephemeral Zerocash mixers is that when we take the coins out of the mixer they can now be correlated to our meta-data (e.g. IP address, etc). So thus it seems to hide large funds and only take out small portions publicly as needed, will incur risk of losing those coins in my proposal, but at least they will be provably anonymous.

Anonymity is a clusterfuck. If we can't make trusted hardware, then anonymity is unprovable. Period.

So just give up on anonymity, or get busy trying to make hardware we can trust?

(or if Zerocash has developed a provably secure way to generate a master public key, which I doubt)


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: aminorex on January 21, 2016, 02:27:58 AM
DRM has nothing to do with it all. Thus I assume you don't understand the issue.

You are not giving him due credit. (AM is not a typical BTCT slouch.)  It is an allusion to "reflections on trusting trust" https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on January 21, 2016, 04:38:51 AM
Chip fabs are a very high capital investment. Thus they can't run away from the directives of the national security agencies.

That is certainly true. Indeed I personally distrust current chips and hoard some older platforms that were much less feasible to backdoor in a manner that would be useful today. When they break or wear out I will lose that option though. Still, there is a diversity even of current chips, and they may not all be backdoored in an equivalent manner or by cooperating parties, if they are backdoored at all.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on January 21, 2016, 04:49:20 AM
I would not entrust not receiving jail time to the assumption that my meta-data can't be correlated, neither with Monero nor Bitcoin. The only anonymous things I would do would be legal things I want to hide from, for example, the public, but not from the NSA (and the employees of the NSA). In that case, I can do this reasonably well using Bitcoin.

I can't make the sources of my transaction untraceable with Bitcoin (unless I use some unreliable mixer, CoinJoin, or CoinShuffle), i.e. if someone wanted to premine and then make it impossible to connect them to the premined coins. So maybe we can argue that Cryptonote/Monero would help people who want to create scams. But decentralized exchanges might accomplish the same (not sure about that yet, still analyzing them).

Let's try to stop thinking like clueless anarchists and start to think like businessmen who want to market our products to real markets.

Would that be on topic here? I'm pretty sure the answer is no

This thread is about Monero technical ideas, discussion, ONLY. If you barge in here with something non-technical, I will delete it.

What's the deal with you and Monero anyway? You have  many of your own threads. Why not just discuss your broad ideas about marketing products to real markets there instead?



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 21, 2016, 05:43:06 AM
Do you have specific technical proposals how to improve Monero?

I am thinking best to adopt zk-snarks because (in high level conceptual thinking which could possibly be missing some key detail) it is more general to any sort of block chain contract (script) a business might want to do, not just currency. As well, I argued in the prior post that businesses will not likely adopt a privacy solution for currency which forces them to trust Tor or I2P to obscure their IP address meta-data (and that isn't even the only meta-data that will need to be obscured and some other meta-data might even be impossible to obscure).

Thus I argue if corporations are going to adopt privacy on block chains, they will choose zk-snarks in spite of your arguments that the masterkey must be created in a secure way. Corporations trust the institutions of society, e.g. the police, the courts, the government, etc.. Also afaics corporations are sort of a hierarchical structure with smaller companies serving larger companies, and so they are likely to fall in line to what larger corporations demand for interoption. What I am saying is that the top-down structure of corporatism means accepting that someone at the very top gets to create the masterkey for Zerocash.

Of course I would prefer an anonymity solution that has no caveats. But after all my study (and even inventions of anonymity solutions) I have learned there can't exist anonymity systems w/o caveats.

Thus if we are choosing which set of caveats (tradeoffs) we prefer to develop around, I think we must incorporate the needs of markets into our thought process.

I simply don't see any markets for Cryptonote style anonymity. I wish I could think of a market, but I can't. Whereas, zk-snarks seems to potentially have real business applications.

And that is damn unbiased opinion, because I have nothing to gain from zk-snarks. I have no knowledge of how to code them nor do I yet completely understand them. It doesn't give me any advantage whatsoever to come to the conclusion they are superior to develop around. Hell, I even have Zero Knowledge Transactions which is superior to RingCT which I had to abandon because I came to this realization. I am a loser too.

I don't know why you guys are so unable to discuss issues without freaking out. Smooth, if you are truly diversified, then why can't you act more calmly? Did you promise all the speculators that you were surety? Remember Proverbs says, "Don't be surety for another".

Look way back in 2014 when you launched Monero, I told you smooth and fluffypony that IP address correlation was the weakness. Fluffypony proceeded to try to integrate I2P. I warned you all many times that was not an adequate direction. But you wouldn't listen.

And now you attack me and are angry at me for trying to help you not waste more of your time and effort.

I simply don't understand you guys. Why can't you be more open-minded and also more amicable to people who want to discuss matters?

Is it pride? Somebody shot your baby.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on January 21, 2016, 05:47:27 AM
I don't know why you guys are so unable to discuss issues without freaking out. Smooth, if you are truly diversified, then why can't you act more calmly? Did you promise all the speculators that you were surety? Remember Proverbs says, "Don't be surety for another".

Nobody is freaking out, just trying to stay on topic, and I sure as hell never promised investors or speculators anything. I'm the one who gets shit for telling them their investment will probably go to zero, remember? Your comments about moving to zksnarks are on topic, so that's all fine.

Quote
Look way back in 2014 when you launched Monero, I told you smooth and fluffypony that IP address correlation was the weakness. Fluffypony proceeded to try to integrate I2P. I warned you all many times that was not an adequate direction. But you wouldn't listen.

I2P, and even somewhat Tor, is perceived as adequate by 99% of the market. The remaining 1% may be smarter but isn't obviously much of a market at all. Very niche-y.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 21, 2016, 06:15:43 AM
Look way back in 2014 when you launched Monero, I told you smooth and fluffypony that IP address correlation was the weakness. Fluffypony proceeded to try to integrate I2P. I warned you all many times that was not an adequate direction. But you wouldn't listen.

I2P, and even somewhat Tor, is perceived as adequate by 99% of the market. The remaining 1% may be smarter but isn't obviously much of a market at all. Very niche-y.

By the speculators because they are clueless.

But the corporations do not use darknets. They want privacy on the block chain, like we have disk encryption. Mention darknets, illegal drug trade, etc., and they won't touch it with a 100-foot pole.

I would guess that many corporations do use Tor now for certain things. I2P will be integrated and invisible. No one will know or care how it works, except that the obvious network-level vulnerabilities having to do with broadcasting transactions will be removed, and it will pass routine (though not intelligence-agency level) technical muster for being sufficiently private to satisfy most of the market. That's my opinion, and you are welcome to disagree.

Zerocash still needs IP obfuscation for a lot of private usages in practice too. They acknowledge it in the paper.

Zerocash does not need IP obfuscation when all the transactions are in the private zerocoins. Cite the section of the paper. I think you must be misunderstanding something. You are probably conflating the use of the regular non-anonymous coins mentioned in the paper.

Here you are making excuses again. Corporations are not going to trust unprovable shit. And moreover, mixnets are always vulnerable to flood attacks. They are very, very unreliable. Not only do I disagree, but I also think you are ignoring basic fundamental realities about the technologies.

Edit: arguing for Tor/I2P is akin to arguing for Dash's off chain mixing. Now look in the mirror and remember your arguments for End-to-End Principled ring sigs (versus off chain mixing) and realize the same logic applies to why Zerocash is superior to using off chain mixnets. Hypocrite.

Edit#2: okay I see the section you are referring to:

Quote
6.4 Additional anonymity considerations
Zerocash only anonymizes the transaction ledger. Network traffic used to announce transactions, retrieve blocks, and contact merchants still leaks identifying information (e.g., IP addresses). Thus users need some anonymity network to safely use Zerocash. The most obvious way to do this is via Tor [DMS04]. Given that Zerocash transactions are not low latency themselves, mixnets (e.g., Mixminion [DDM03]) are also a viable way to add anonymity (and one that, unlike Tor, is not as vulnerable to traffic analysis). Using mixnets that provide email-like functionality has the added benefit of providing an out-of-band notification mechanism that can replace Receive.

Additionally, although in theory all users have a single view of the block chain, a powerful attacker could potentially fabricate an additional block solely for a targeted user. Spending any coins with respect to the updated Merkle tree in this "poison-pill" block will uniquely identify the targeted user. To mitigate such attacks, users should check with trusted peers their view of the block chain and, for sensitive transactions, only spend coins relative to blocks further back in the ledger (since creating the illusion for multiple blocks is far harder).

I will need to understand this attack better. Seems to me they are saying that you need to spend from a block where your pour transaction was the only transaction in the block. But the user would I think know this and thus not spend the coin any more. Thus I believe the anonymity remains provable without the use of any mixnet. I will need to understand this more deeply to be sure.
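The mitigation the paper suggests (cross-checking one's chain view with trusted peers and only spending against sufficiently buried blocks) can be sketched as a simple client-side rule. All names and thresholds below are illustrative assumptions, not part of any actual Zerocash client:

```python
# Sketch of the "poison-pill" mitigation from Zerocash sec. 6.4:
# only spend against a Merkle root that (a) enough trusted peers
# also report, and (b) is buried at least min_depth blocks deep.
# A block fabricated solely for us would not be seen by independent
# peers, so a quorum check catches it. All names are illustrative.

MIN_DEPTH = 6   # assumed safety depth (blocks below the tip)
QUORUM = 2      # trusted peers that must confirm the root

def safe_anchor_root(local_chain, peer_roots,
                     min_depth=MIN_DEPTH, quorum=QUORUM):
    """Return the newest Merkle root safe to spend against, or None.

    local_chain: list of Merkle roots, oldest first.
    peer_roots:  list of sets of roots reported by trusted peers.
    """
    if len(local_chain) < min_depth:
        return None
    # Candidate: the root buried min_depth blocks below the tip.
    candidate = local_chain[-min_depth]
    # Require a quorum of independent peers to have seen it too.
    agreeing = sum(1 for roots in peer_roots if candidate in roots)
    return candidate if agreeing >= quorum else None
```

For example, with a 10-block local chain and three peers, the root six blocks below the tip is returned only if at least two peers have it in their view; a chain shorter than the safety depth yields nothing to spend against.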


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on January 21, 2016, 06:46:49 AM
Those are two separate issues.

Quote
Corporations are not going to trust unprovable shit

1. You can not prove that the properties of on-chain input mixing are unprovable. In fact, obviously some properties are definitely provable, so really the question is which ones.

2. I disagree with the above statement. They do so all the time. Cryptography itself isn't even provable beyond stated assumptions. And certainly not elliptic curve cryptography without which Zerocash does not exist (nor Cryptonote, but I'm told that Cryptonote is still mathematically stronger -- outside my expertise).

All this stuff is about using the "best" available tool where "best" is not a simple metric.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 21, 2016, 06:51:35 AM
Those are two separate issues.

They are saying the IP address is leaked. They are not saying it can be correlated to any transaction, except for that bizarre attack in the second point which seems to be detectable and only likely in a (near) majority hashrate attack. This appears to be a non-issue.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on January 21, 2016, 06:57:08 AM
Those are two separate issues.

They are saying the IP address is leaked. They are not saying it can be correlated to any transaction, except for that bizarre attack in the second point which seems to be detectable and only likely in a (near) majority hashrate attack. This appears to be a non-issue.

They are saying that nearly any practical use where privacy is desired will still require shielding the network layer to remain private. Partial solutions that hide connections between transactions on the chain but still put a big shining beacon all over your online activity (and these days almost all activity is online, even when you are "offline" and using a mobile device) are largely useless.

"Zerocash only anonymizes the transaction ledger"

Not good enough.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 21, 2016, 07:04:56 AM
Those are two separate issues.

Quote
Corporations are not going to trust unprovable shit

1. You can not prove that the properties of on-chain input mixing are unprovable. In fact, obviously some properties are definitely provable, so really the question is which ones.

Nonsense. The meta-data can be correlated. It is unprovable which of a myriad of scenarios will be correlated and not correlated. The entropy of the universe is unbounded.  ;)

2. I disagree with the above statement. They do so all the time. Cryptography itself isn't even provable beyond stated assumptions.

Very strong assumptions backed by a lot of math. And a lot of very smart mathematicians and cryptographers trying to break the math.

If cryptography is broken, then society may stop functioning and we may regress several decades in living standards.

And certainly not elliptic curve cryptography without which Zerocash does not exist (nor Cryptonote, but I'm told that Cryptonote is still mathematically stronger -- outside my expertise).

Cryptonote is likely more mathematically well supported. Zerocash will indeed need to garner more peer review for it to be as trusted as ECC.

All this stuff is about using the "best" available tool where "best" is not a simple metric.

Dash is better than Monero then! Come on smooth don't be a hypocrite. On chain anonymity is End-to-End Principled. Off chain mixing is not. That is fundamental and has been (one of) the argument(s) employed by Monero against Dash.

On chain anonymity is provable with math, except not for Cryptonote because the combinatorial analysis math is unfathomable and can't be expressed in a closed form. With Zerocash, the math of the anonymity set is simple; it is everyone. Every transaction in the universe is in your anonymity set in Zerocash.

Cryptonote isn't even close not even with orders-of-magnitude close. And that is not even factoring in the meta-data issue.

You are trying to equate a microbe to an elephant. The elephant is in your living room and you are denying it.
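To make the orders-of-magnitude claim concrete, here is a toy calculation of nominal anonymity-set sizes, measured in bits of entropy. This is an upper bound that ignores intersection and meta-data analysis, and all numbers are illustrative:

```python
# Toy comparison of nominal anonymity-set sizes (illustrative only).
# Tracing k hops back through ring signatures, an observer sees at
# most ring_size**k candidate sources (real analysis shrinks this
# via ring overlap and chain-reaction effects). For a Zerocash-style
# pour, every shielded output ever created is a candidate.
import math

def ring_candidates_upper_bound(ring_size, hops):
    return ring_size ** hops

def zerocash_candidates(total_shielded_outputs):
    return total_shielded_outputs

def anonymity_bits(set_size):
    """Entropy upper bound (bits) of a uniform anonymity set."""
    return math.log2(set_size)

# Ring size 5, traced 3 hops: at most 125 candidates (~7 bits),
# vs. a hypothetical 1,000,000 shielded outputs (~20 bits).
print(anonymity_bits(ring_candidates_upper_bound(5, 3)))
print(anonymity_bits(zerocash_candidates(1_000_000)))
```

Even this best-case bound for rings is many bits short of the full-ledger set, which is the gap being argued about here.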


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on January 21, 2016, 07:09:09 AM
2. I disagree with the above statement. They do so all the time. Cryptography itself isn't even provable beyond stated assumptions.

Very strong assumptions backed by a lot of math. And a lot of very smart mathematicians and cryptographers trying to break the math.

If cryptography is broken, then society may stop functioning and we may regress several decades in living standards.

I didn't say it was broken, I said it was unprovable. Of course some cryptography is actually broken too, which is somewhat related.

Quote
And certainly not elliptic curve cryptography without which Zerocash does not exist (nor Cryptonote, but I'm told that Cryptonote is still mathematically stronger -- outside my expertise).

Cryptonote is likely more mathematically well supported. Zerocash will indeed need to garner more peer review for it to be as trusted as ECC.

Not just a peer review issue. I'm told the math itself is actually weaker (the necessary assumptions are stronger). But I'll leave that to the mathematicians.

And no I don't agree that Tor and I2P are the same as Dash. Those tools are mature, based on well-understood principles that are proven to observe certain properties given other certain assumptions.

Not one of these layers is provably secure in all manners. Not CryptoNote, not Dash, not Tor, not I2P, not Zerocash, not ECC. You layer the pieces together and get a solution, ideally layering pieces that are well designed and have understood and desired properties. Dash is none of that; it is just one guy (with at best weak qualifications) making everything up as he goes along.

Quote
On chain anonymity is provable with math, except not for Cryptonote because the combinatorial analysis math is unfathomable and can't be expressed in a closed form

Proof?

I think likely false. Random selection means definable properties and generally favorable adversarial properties too. I'm pretty sure many of these properties can be expressed in closed form.

Less true for the broader issues of privacy outside of the chain, but as we know every system has those issues. We pick the pieces with the most desired properties.

Or we decide the available components are not good enough for our personal goals and move on to doing something else with our life. If that sounds like a suggestion, it is, but meant as a sincere one, not, "Get lost".



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 21, 2016, 07:28:46 AM
2. I disagree with the above statement. They do so all the time. Cryptography itself isn't even provable beyond stated assumptions.

Very strong assumptions backed by a lot of math. And a lot of very smart mathematicians and cryptographers trying to break the math.

If cryptography is broken, then society may stop functioning and we may regress several decades in living standards.

I didn't say it was broken, I said it was unprovable. Of course some cryptography is actually broken too, which is somewhat related.

Do you always ignore my first sentence?

Number theoretic assumptions with strong support are not in the same class as the unprovable nature of meta-data correlation.

You are equating different categories which are not equal in risk. Not even orders-of-magnitude close in risk.

Quote
And certainly not elliptic curve cryptography without which Zerocash does not exist (nor Cryptonote, but I'm told that Cryptonote is still mathematically stronger -- outside my expertise).

Cryptonote is likely more mathematically well supported. Zerocash will indeed need to garner more peer review for it to be as trusted as ECC.

Not just a peer review issue. I'm told the math itself is actually weaker (necessarily assumptions are stronger). But I'll leave that to the mathematicians.

The paper mentions I think 80 and 128-bit security levels, but I assume the bit security can be increased.

As for any alleged stronger number theoretic assumptions (thus weaker support and security assurances) with the bilinear pairings, I am also not expert enough in algebraic math to judge that.

And no I don't agree that Tor and I2P are the same as Dash. Those tools are mature, based on well-understood principles that are proven to observe certain properties given other certain assumptions.

Tor and I2P are fundamentally flawed if one is asking for provable reliability of their anonymity.

Ditto Dash.

The distinction is useless to the person who doesn't need anonymity 99% of the time and rather needs it 99.999999% of the time.

Not one of these layers is provably secure in all manners. Not CryptoNote, not Dash, not Tor, not I2P, not Zerocash.

Again you equate things which have orders-of-magnitude difference in risk level. Zerocash is orders-of-magnitude more provable (not counting any doubts about the peer review of the math).

You layer the pieces together and get a solution, ideally layering pieces that are well designed and have understood and desired properties. Dash is none of that, it just one guy making everything up as he goes along.

Layering 1 in 100 failure rate mixnets with 1 in 10,000 failure rate Cryptonote means 1 in 100 failure rate. Layering doesn't help you with anonymity, because the LCD (weakest layer) dominates the outcome.
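The weakest-link arithmetic can be made explicit. Assuming the layers fail independently and a compromise of any single layer deanonymizes the user, the combined failure probability is 1 - (1 - p1)(1 - p2)..., which the largest p dominates. A minimal sketch using the made-up rates above:

```python
# Combined anonymity failure when layering independent mechanisms:
# a compromise of ANY layer deanonymizes, so
#   P(fail) = 1 - prod(1 - p_i)
# and the weakest (largest p_i) layer dominates the result.

def layered_failure(probs):
    ok = 1.0
    for p in probs:
        ok *= (1.0 - p)   # probability every layer holds
    return 1.0 - ok

# 1-in-100 mixnet layered over 1-in-10,000 ring signatures:
p = layered_failure([0.01, 0.0001])
print(round(p, 6))  # 0.010099, i.e. still roughly 1 in 100
```

Note this models the case where any layer's failure is fatal; where a layer only degrades (rather than destroys) anonymity, the composition is more favorable, which is part of the disagreement in this exchange.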


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on January 21, 2016, 07:32:27 AM
Strong number theoretic assumptions are not in the same class as the unprovable nature of meta-data correlation.

If you are talking about meta-data on chain, then it is characterizable. If you are talking about metadata off chain, then all solutions must address it, or be confined to very limited use cases that don't involve accompanying interaction, such as (maybe) donations.

Quote
The distinction is useless to the person who doesn't need anonymity 99% of the time and rather needs it 99.999999% of the time.

Sounds like you haven't been paying attention to the part where Monero told you for the past two years that it isn't trying to be NSA-proof.

This is the Technical Improvement thread though, so if you have ideas how to improve it to better approach that ideal (whether or not reaching it), please present them in technical form, otherwise this discussion will be winding down.

Quote
Layering 1 in 100 failure rate mixnets with 1 in 10,000 failure rate Cryptonote means 1 in 100 failure rate. Layering doesn't help you with anonymity, because the LCD (weakest layer) dominates the outcome.

In that case a user engaging in online activity that involves a mixnet for everything but payment, along with a 1 in a zillion Zerocash failure for payment, will also suffer from overall 1/100 failure rate. The numbers are made up of course.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 21, 2016, 07:37:26 AM
Strong number theoretic assumptions are not in the same class as the unprovable nature of meta-data correlation.

If you are talking about metadata off chain, then all solutions must address it.

Zerocash not. That has been my entire point. The on chain transactions can't be correlated to the off chain meta-data.

No wonder you guys are treating me like shit. You are completely clueless about these issues. Every single point you are wrong.

The distinction is useless to the person who doesn't need anonymity 99% of the time and rather needs it 99.999999% of the time.

Sounds like you haven't been paying attention to the part where Monero told you for the past two years that it isn't trying to be NSA-proof.

Sounds like you are not paying attention today, where I have asked you what those markets are and in every case I have explained they are tiny and/or Zerocash is preferred for the markets you suggested.

Layering 1 in 100 failure rate mixnets with 1 in 10,000 failure rate Cryptonote means 1 in 100 failure rate. Layering doesn't help you with anonymity, because the LCD (weakest layer) dominates the outcome.

In that case a user engaging in online activity that involves a mixnet for everything but payment along with a 1 in a zillion Zerocash failure will also suffer from overall 1/100 failure rate. The numbers are made up of course.

Incorrect.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on January 21, 2016, 07:46:19 AM
Strong number theoretic assumptions are not in the same class as the unprovable nature of meta-data correlation.

If you are talking about metadata off chain, then all solutions must address it.

Zerocash not. That has been my entire point. The on chain transactions can't be correlated to the off chain meta-data.

The ZeroCash developers disagree that is usefully sufficient, as do I. Let's just leave it at that.

As I said earlier:

Quote
This is the Technical Improvement thread though, so if you have ideas how to improve it to better approach that ideal (whether or not reaching it), please present them in technical form, otherwise this discussion will be winding down.

Your suggestion was to merge the Zerocash protocol into Monero. Okay. I'm not even saying that will or won't happen.

Any other Technical Improvements to propose?




Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 21, 2016, 08:09:51 AM
Another point: layering Tor/I2P is an additional requirement on the use of the block chain, one which Zerocash doesn't force on the user.

The more layers you bind together in future contracts, the fewer degrees-of-freedom the block chain has.

There are so many basic concepts that you all should have contemplated within the nearly 2 years since Monero was released.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on January 21, 2016, 12:53:32 PM
So can we conclude that Monero's underlying cryptonote technology will not be the best privacy technology forever?

Can we conclude that Monero is one of the few fully functioning private cryptocurrency networks currently?

Can we conclude that off chain data (ip addresses) are something that needs to be addressed for all private cryptocurrency networks?

Can we conclude that a possible technical improvement to Monero would be some kind of zero-knowledge proof thing?

TPTB, I commend your enthusiasm, but one of the problems I think in this conversation is a lack of brevity. No one has time to read ALL of this, so things are missed, and you get frustrated. If you want to have useful discussions, it's probably better to not have paragraphs of text, regardless of how much needs to be said. Writing 1 paragraph is much more difficult than writing 10 pages.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: Shrikez on January 21, 2016, 08:26:03 PM

I invest in people and the kind of people they attract and convince to embark on a common journey. That's why I like Monero and don't like Dash, to name one example.

Shrikez if ever you invest in something I was involved in or created, it won't be because you admired how I was able to bring a community of speculators together with some developers and manage the group-think circle-jerk.


Your and my perception of said group seem to differ. Of course there is some confirmation bias, it's human and you yourself are not free from it, can't be.

All the best, stay focused.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on January 21, 2016, 08:31:40 PM
My god this is boring. Let me know when you are done so I can repost the fusion block thing. I had some good ASCII diagrams.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 22, 2016, 12:15:50 AM
So can we conclude that Monero's underlying cryptonote technology will not be the best privacy technology forever?

Can we conclude that Monero is one of the few fully functioning private cryptocurrency networks currently?

Can we conclude that off chain data (ip addresses) are something that needs to be addressed for all private cryptocurrency networks?

Can we conclude that a possible technical improvement to Monero would be some kind of zero-knowledge proof thing?

TPTB, I commend your enthusiasm, but one of the problems I think in this conversation is a lack of brevity. No one has time to read ALL of this, so things are missed, and you get frustrated. If you want to have useful discussions, it's probably better to not have paragraphs of text, regardless of how much needs to be said. Writing 1 paragraph is much more difficult than writing 10 pages.

Off the top of my head to return the favor for you not deleting posts and I may be missing a few points:

  • zk-snarks can be used to make any script anonymous, not just currency as for CN/RingCT. Businesses will need this.
  • Anonymity of Zerocash (ZC) is never compromised by compromising the masterkey, only the coin supply is.
  • ZC makes the entire block chain a blob uncorrelated to meta-data, whereas CN/RingCT have distinct UTXO which can be so correlated.
  • ZC doesn't require Tor/I2P thus has more degrees-of-freedom and is End-to-End principled, whereas CN/RingCT are not.
  • Both ZC and CN/RingCT can lose anonymity or have undetectable increase in coin supply if the crypto is cracked.
  • CN/RingCT has the lowest common denominator anonymity which is usually I2P, i.e. maybe 99% vs 99.999% for ZC.
  • Businesses will favor the more provable, more End-to-End freedom choice of ZC.
  • I think the chance of jail time when using CN/RingCT for any action that the State doesn't want you to do, is very high. The anonymity is not robust, as I summarized above.
  • I can't think of any user adoption markets of any significant size of CN/RingCT, other than selling it to speculators. In other words, I view CN/RingCT as just another pump job albeit with some strong developers (who hopefully will get better leadership).
  • I am saying that CN/RingCT is not a viable technology. So arguing that it is the best we have for now, IMO doesn't make much sense, unless that is just a sales pitch to speculators (again keeping in mind the Securities Law and the Howey test in the USA and the implications of leading speculators into an investment with misleading prospectus and not registered with the SEC).

Edit: some of those points have finer points of contention. So review the long discussion for that.

For example, in the cases where one needs to use Tor/I2P with ZC, those transactions are often impossible to make anonymous by any means because they involve, for example, buying a product from a retailer who complies with government regulations (KYC, etc.).


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: DaveyJones on January 22, 2016, 12:28:21 AM
So can we conclude that Monero's underlying cryptonote technology will not be the best privacy technology forever?

Can we conclude that Monero is one of the few fully functioning private cryptocurrency networks currently?

Can we conclude that off chain data (ip addresses) are something that needs to be addressed for all private cryptocurrency networks?

Can we conclude that a possible technical improvement to Monero would be some kind of zero-knowledge proof thing?

TPTB, I commend your enthusiasm, but one of the problems I think in this conversation is a lack of brevity. No one has time to read ALL of this, so things are missed, and you get frustrated. If you want to have useful discussions, it's probably better to not have paragraphs of text, regardless of how much needs to be said. Writing 1 paragraph is much more difficult than writing 10 pages.

Off the top of my head to return the favor for you not deleting posts and I may be missing a few points:

  • zk-snarks can be used to make any script anonymous, not just currency as for CN/RingCT
  • Anonymity of Zerocash (ZC) is never compromised by compromising the masterkey, only the coin supply is.
  • ZC makes the entire block chain a blob uncorrelated to meta-data, whereas CN/RingCT have distinct UTXO which can be so correlated.
  • ZC doesn't require Tor/I2P thus has more degrees-of-freedom and is End-to-End principled, whereas CN/RingCT are not.
  • Both ZC and CN/RingCT can lose anonymity or have undetectable increase in coin supply if the crypto is cracked.
  • CN/RingCT has the lowest common denominator anonymity which is usually I2P, i.e. maybe 99% vs 99.999% for ZC.
  • Businesses will favor the more provable, more End-to-End freedom choice of ZC.
  • I think the chance of jail time when using CN/RingCT for any action that the State doesn't want you to do, is very high. The anonymity is not robust, as I summarized above.
  • I can't think of any user adoption markets of any significant size of CN/RingCT, other than selling it to speculators. In other words, I view CN/RingCT as just another pump job albeit with some strong developers (who hopefully will get better leadership).
  • I am saying that CN/RingCT is not a viable technology. So arguing that it is the best we have for now, IMO doesn't make much sense, unless that is just a sales pitch to speculators (again keeping in mind the Securities Law and the Howey test in the USA and the implications of leading speculators into an investment with misleading prospectus and not registered with the SEC).

As you've dug deeper into the topic and talk about businesses adopting ZC rather than CN: does ZC have the option to be auditable? Real businesses favor something that can be audited. Can you actually prove you own xxx amount of ZC without handing over your whole keys? CN has a view key for that; what does ZC have? (Besides, neither of our guesses about what businesses will or won't adopt carries any weight, since it's not up to us but to those who run the businesses.)

Sorry for the bad English; I hope you get the points.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 22, 2016, 12:31:44 AM
As you've dug deeper into the topic and talk about businesses adopting ZC rather than CN: does ZC have the option to be auditable? Real businesses favor something that can be audited. Can you actually prove you own xxx amount of ZC without handing over your whole keys? CN has a view key for that; what does ZC have? (Besides, neither of our guesses about what businesses will or won't adopt carries any weight, since it's not up to us but to those who run the businesses.)

Sorry for the bad English; I hope you get the points.

Good point. Someone should check.

P.S. I edited my prior post.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on January 22, 2016, 12:34:28 AM
So can we conclude that Monero's underlying cryptonote technology will not be the best privacy technology forever?

Can we conclude that Monero is one of the few fully functioning private cryptocurrency networks currently?

Can we conclude that off chain data (ip addresses) are something that needs to be addressed for all private cryptocurrency networks?

Can we conclude that a possible technical improvement to Monero would be some kind of zero-proof knowledge thing?

TPTB, I commend your enthusiasm, but one of the problems I think in this conversation is a lack of brevity. No one has time to read ALL of this, so things are missed, and you get frustrated. If you want to have useful discussions, it's probably better to not have paragraphs of text, regardless of how much needs to be said. Writing 1 paragraph is much more difficult than writing 10 pages.

Off the top of my head to return the favor for you not deleting posts and I may be missing a few points:

  • zk-snarks can be used to make any script anonymous, not just currency as for CN/RingCT
  • Anonymity of Zerocash (ZC) is never compromised by compromising the masterkey, only the coin supply is.
  • ZC makes the entire block chain a blob uncorrelated to meta-data, whereas CN/RingCT have distinct UTXO which can be so correlated.
  • ZC doesn't require Tor/I2P thus has more degrees-of-freedom and is End-to-End principled, whereas CN/RingCT are not.
  • Both ZC and CN/RingCT can lose anonymity or have undetectable increase in coin supply if the crypto is cracked.
  • CN/RingCT has the lowest common denominator anonymity which is usually I2P, i.e. maybe 99% vs 99.999% for ZC.
  • Businesses will favor the more provable, more End-to-End freedom choice of ZC.
  • I think the chance of jail time when using CN/RingCT for any action that the State doesn't want you to do, is very high. The anonymity is not robust, as I summarized above.
  • I can't think of any user adoption markets of any significant size of CN/RingCT, other than selling it to speculators. In other words, I view CN/RingCT as just another pump job albeit with some strong developers (who hopefully will get better leadership).
  • I am saying that CN/RingCT is not a viable technology. So arguing that it is the best we have for now, IMO doesn't make much sense, unless that is just a sales pitch to speculators (again keeping in mind the Securities Law and the Howey test in the USA and the implications of leading speculators into an investment with misleading prospectus and not registered with the SEC).

As you've dug deeper into the topic and talk about businesses adopting ZC rather than CN: does ZC have the option to be auditable? Real businesses favor something that can be audited. Can you actually prove you own xxx amount of ZC without handing over your whole keys? CN has a view key for that; what does ZC have? (Besides, neither of our guesses about what businesses will or won't adopt carries any weight, since it's not up to us but to those who run the businesses.)

Sorry for the bad English; I hope you get the points.

ZC is very immature at this point. You can't even make payments with more than one output. No multisig or other sort of contracts, even simple ones. There is no "view key", though it has been mentioned that one could be added. For more complex contracts, the current approach will be infeasible for the foreseeable future (it is barely feasible for simple coin pours -- it takes 1 minute on a desktop). Eventually that stuff can be worked out (feasibility beyond a certain point is not guaranteed though, just reasonable to expect eventually with technological advances), but we are talking about some indefinite future.

There is always going to be some better technology on the horizon. By the time Zerocash becomes more mature, there will likely be something else on the horizon that is superior in various ways, yet itself not mature. And so it continues.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 22, 2016, 12:37:40 AM
What is missing from your analysis, smooth, is at what level of functionality businesses are willing to embrace block chains. I argue CN/RingCT is below the acceptable level and cannot be raised to that level because the fundamentals are not End-to-End principled (and also because it can only obscure the payers, payees, and values, not any type of script or other aspects of the block chain data). Businesses will prefer private databases where they can hide all the data until public block chains mature enough to do so. Public block chains promise more interoperation and network effects, once we can make them truly private.

I try to light a fire under you guys to get you refocused on technology that can meet your goal of being a privacy block chain for businesses. That is where the real market is.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on January 22, 2016, 12:45:19 AM
What is missing from your analysis, smooth, is at what level of functionality businesses are willing to embrace block chains. I argue CN/RingCT is below the acceptable level and cannot be raised to that level because the fundamentals are not End-to-End principled (and also because it can only obscure the payers, payees, and values, not any type of script or other aspects of the block chain data). Businesses will prefer private databases where they can hide all the data until public block chains mature enough to do so. Public block chains promise more interoperation and network effects, once we can make them truly private.

There is no feasible technology to do non-trivial scripts using zksnarks at this time. It doesn't exist. Zerocash is pushing the limits already.

While there may be a market for zero knowledge smart contracts on a blockchain, that doesn't even matter because it can't be implemented.

Perhaps if you think that is the only market that exists you should just take a break and come back to the space in a few years and reevaluate.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 22, 2016, 12:54:00 AM
What is missing from your analysis, smooth, is at what level of functionality businesses are willing to embrace block chains. I argue CN/RingCT is below the acceptable level and cannot be raised to that level because the fundamentals are not End-to-End principled (and also because it can only obscure the payers, payees, and values, not any type of script or other aspects of the block chain data). Businesses will prefer private databases where they can hide all the data until public block chains mature enough to do so. Public block chains promise more interoperation and network effects, once we can make them truly private.

There is no feasible technology to do non-trivial scripts using zksnarks at this time. It doesn't exist. Zerocash is pushing the limits already.

While there may be a market for zero knowledge smart contracts on a blockchain, that doesn't even matter because it can't be implemented.

Perhaps if you think that is the only market that exists you should just take a break and come back to the space in a few years and reevaluate.

You are cherry picking points. zk-snarks scripts wasn't my only nor even my main justification.

Afaik, zk-snarks can implement any circuit if one accepts the proving time and verification time (there might also be some other resource constraint such as RAM but I think not), with proving time being much worse than verification time. And one would expect that it can be radically sped up with ASICs to enable more complex circuits to be verified in realistic time!

The point is there are very likely some simple scripts that can surely be done with zk-snarks in realistic times, and which are very useful for businesses interoperating on the block chain. IoT is one likely candidate, and probably many more.

"Build it and they will come after 5 years" is a nice pitch to speculators, but in my line of work I had to produce a marketed product to earn an income. You worked in (programming for) finance (something you acknowledged recently in a public post), so I assume you never had to do this. So I understand that in for-profit software the mantra is "ship it, sell it"; otherwise projects go on and on and are never finished.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on January 22, 2016, 01:01:59 AM
What is missing from your analysis, smooth, is at what level of functionality businesses are willing to embrace block chains. I argue CN/RingCT is below the acceptable level and cannot be raised to that level because the fundamentals are not End-to-End principled (and also because it can only obscure the payers, payees, and values, not any type of script or other aspects of the block chain data). Businesses will prefer private databases where they can hide all the data until public block chains mature enough to do so. Public block chains promise more interoperation and network effects, once we can make them truly private.

There is no feasible technology to do non-trivial scripts using zksnarks at this time. It doesn't exist. Zerocash is pushing the limits already.

While there may be a market for zero knowledge smart contracts on a blockchain, that doesn't even matter because it can't be implemented.

Perhaps if you think that is the only market that exists you should just take a break and come back to the space in a few years and reevaluate.

You are cherry picking points. zk-snarks scripts wasn't my only nor even my main justification.

Afaik, zk-snarks can implement any circuit if one accepts the proving time and verification time (there might also be some other resource constraint such as RAM but I think not), with proving time being much worse than verification time. And one would expect that it can be radically sped up with ASICs to enable more complex circuits to be verified in realistic time!

All of this is entirely consistent with what I said about coming back in a few years. The proving and verification times for Zerocash, with the most simple scripts possible, are barely feasible to use, and even that might be disputed.

Quote
The point is there are very likely some simple scripts that can surely be done with zk-snarks in realistic times, and which are very useful for businesses interoperating on the block chain. IoT is one likely candidate, and probably many more.

Implementing custom scripts for specific use cases is way out of scope for Monero as I understand the project. But it is open source after all, so if interesting pull requests are submitted they would probably be merged (after sufficient testing and review). That could include "useful" scripts along with an open source zksnark library.

Quote
"Build it and they will come after 5 years" is a nice pitch to speculators, but in my line of work I had to produce a marketed product to earn an income. You worked in (programming for) finance (something you acknowledged recently in a public post), so I assume you never had to do this. So I understand that in for-profit software the mantra is "ship it, sell it"; otherwise projects go on and on and are never finished.

Again consistent with take a break and come back when the technology is ready for a "build it, ship it, sell it" approach.

Also this is entirely irrelevant to Monero since Monero is an open source project not a product. So off topic for the thread. Please respect the thread starter, the forum, and the community and try to stay on topic.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on January 22, 2016, 01:07:55 AM
You say that zk-snarks can't do any worthwhile scripts, yet an entirely anonymous coin has been implemented with them that offers superior anonymity to CN/RingCT.

Your stubbornness is the main reason I can't work with you. Leadership requires symbiosis of ideas and directions. It requires vision.

Smooth, it is quite evident that open source needs leadership. Direction is not likely to be driven by a random contributor (if he were that capable, he would fork or start his own project rather than battle against leadership that is not focused on the direction he wants to go).

Edit: you apparently haven't even quantified the metrics yet, which is pretty lame for a competitor not to do.

Edit#2: perhaps if you all spent less time talking about market movements on the exchanges, and more time doing technical research and marketing research...


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: FreeTrade on January 22, 2016, 01:30:20 PM
Wondering if anyone has looked at the possibility of using the iPhone as a mining device. I understand that since the 5S, AES has been present in the hardware, and AES hardware is essential for efficient CryptoNight mining. A combined wallet/miner app could be very compelling; I'm thinking of mining while charging overnight. It would certainly require jailbreaking the device, if it's possible at all.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on January 22, 2016, 11:54:56 PM
There was nothing on-topic ("Improvement Technical Discussion") in your last post, thus no need for any further response. Please stay on topic.


Yeah, I didn't want to delete things because I figured this would run its course eventually, so I might as well let it. Are we done?

So five pages ago I posted this


So I was reading this
http://www.scribd.com/doc/273443462/A-Transaction-Fee-Market-Exists-Without-a-Block-Size-Limit#scribd

and my thoughts started to drift when I encountered the concept that orphanization is one of the impediments to picking what to mine and the whole block size fee market debate etc...

Is there any work in this space regarding what could be called sister blocks, or fusion blocks?

Basically, the way I understand it (and granted, my assumptions could be flawed) is that there exists a set of transactions in the mempool. We'll just use 5 here

Trans1
Trans2
Trans3
Trans4
Trans5

If miner A decides to put 1, 2, 3 in his block (block A), and miner B decides to put 3, 4, 5 in his block (block B), they are both technically valid blocks (they both have the previous block's hash and contain valid transactions from the mempool). However, due to the nature of Satoshi consensus, if block A makes it into the chain first, block B becomes an orphan, even though it is entirely valid.

It's even easier to understand the inefficiency of satoshi consensus if block A has 1,2,3 and block B has 4,5. In this case, there's really no reason both blocks aren't valid.

I see now, as I continue to think about this, that the problem lies in the transaction fees: if a transaction exists in two blocks, which block finder gets the fee? But this isn't an intractable problem.

Essentially what I'm thinking is that you can imagine these two blocks existing as blebs on the chain.
                             .
._._._._._._._._._./
                            \,

each dot is a block, and the comma indicates a sister block
in current protocol, this would happen
                             ._._._
._._._._._._._._._./
                            \,_,

And eventually one chain would grow longer (which is ultimately influenced by bandwidth) and the entire sister chain would be dropped, and if your node was on that chain you'd experience a reorg (right?).

why couldn't something be implemented where the above fork turns into a bleb

                            .
._._._._._._._._._./\.
                            \,/

which is eventually resolved to a fusion block

._._._._._._._._._._!_._


where the ! indicates a fusion block. When encountering a potential orphan scenario (the daemon receives two blocks in close proximity, or has already added a block but then receives a similar block for the same block height), instead of rejecting one as an orphan, the daemon scans the sister block as a candidate for fusion. There would be some parameters (X% of transactions overlap; only concurrent block heights are candidates, which is effectively the time window). As part of this, the system would somehow need to be able to send transaction fees to different block finders, but again this seems tractable (though I await being schooled as to why it's not possible). In addition, the block reward itself would need to be apportioned.
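As a sketch of the candidate check and fee apportionment described above (the `Block` type, the overlap threshold, and the even split of shared fees are all hypothetical choices for illustration, not anything in the protocol):

```python
from dataclasses import dataclass

@dataclass
class Block:
    height: int
    txids: set   # ids of transactions included in the block
    finder: str  # whoever gets the reward for this block

MAX_OVERLAP = 0.5  # the hypothetical "X%" threshold

def is_fusion_candidate(a: Block, b: Block, max_overlap: float = MAX_OVERLAP) -> bool:
    """Two valid sister blocks can be fused if they sit at the same
    height and their transaction sets don't overlap too much."""
    if a.height != b.height:
        return False  # only concurrent block heights qualify
    union = a.txids | b.txids
    if not union:
        return False
    overlap = len(a.txids & b.txids) / len(union)
    return overlap <= max_overlap

def split_fees(a: Block, b: Block, fees: dict) -> dict:
    """Apportion fees between the two finders: each keeps the fees of
    transactions unique to their block; shared ones are split evenly."""
    shared = a.txids & b.txids
    payout = {a.finder: 0.0, b.finder: 0.0}
    for txid, fee in fees.items():
        if txid in shared:
            payout[a.finder] += fee / 2
            payout[b.finder] += fee / 2
        elif txid in a.txids:
            payout[a.finder] += fee
        elif txid in b.txids:
            payout[b.finder] += fee
    return payout
```

With blocks {1,2,3} and {3,4,5} at the same height, the overlap is 1/5 = 20%, so they would qualify under this threshold, and the fee of the shared transaction 3 is split between both finders.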

Or is this what a reorg does? The way I understand reorgs, this is different than a reorg.

Though upon creation of the fusion block a reorganization would have to occur. So at the cost of overall bandwidth we provide a countermeasure for the loss of economic incentive for large blocks.

And one problem to address is that you would need a new block header for the fusion block, but this could really just be the hash of the two sister blocks. Both sisters are valid, therefore the hash of those valid blocks is valid.

Ok back to work.


Is the above possible or am I crazy?

Edited to add: I will be enforcing the moderation again. Please talk technical improvements to monero.

First here is a link to Peter R's paper without all the marketing clutter generated by Scribd http://www.bitcoinunlimited.info/downloads/feemarket.pdf (http://www.bitcoinunlimited.info/downloads/feemarket.pdf).

My take is that the solution to this issue is to mitigate it by creating a proper fee market in Monero. This is something I am actually very interested in working on. In such a scenario the transactions in the memory pool would be ordered by fee per KB, and a rational miner would prioritize transactions in order of return, while the sender would pay a fee depending on the priority desired for the transaction. The existing penalty function for oversized blocks in Monero provides a start, but it and the fee structure will likely have to be optimized to minimize the cost to legitimate users while maximizing the cost of spam and attacks.
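The selection a rational miner would do here can be sketched as a greedy loop over the mempool in fee-per-KB order. The penalty function below only approximates the shape of Monero's oversize-block penalty (quadratic in the excess over the median block size); the mempool structure and all numbers are illustrative:

```python
def penalty(block_size, median_size, base_reward):
    """Reward lost for a block over the median size, roughly the shape
    of Monero's penalty: base_reward * ((size/median) - 1)^2."""
    if block_size <= median_size:
        return 0.0
    ratio = block_size / median_size - 1.0
    return base_reward * ratio * ratio

def select_txs(mempool, median_size, base_reward):
    """mempool: list of (txid, size_kb, fee). Greedily add transactions
    in fee-per-KB order while each one still increases net revenue
    (fees collected minus the oversize penalty)."""
    ordered = sorted(mempool, key=lambda t: t[2] / t[1], reverse=True)
    chosen, size, fees = [], 0.0, 0.0
    for txid, sz, fee in ordered:
        old_net = fees - penalty(size, median_size, base_reward)
        new_net = fees + fee - penalty(size + sz, median_size, base_reward)
        if new_net > old_net:
            chosen.append(txid)
            size += sz
            fees += fee
    return chosen, size, fees
```

The point of the penalty term is that low-fee transactions stop being worth including once the block grows past the median, which is exactly the spam/legitimate-use trade-off being described.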


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on January 23, 2016, 03:13:19 AM
There was nothing on-topic ("Improvement Technical Discussion") in your last post, thus no need for any further response. Please stay on topic.


(snipped fusion block concept, left the meat of it)

why couldn't something be implemented where the above fork turns into a bleb

                            .
._._._._._._._._._./\.
                          \,/

which is eventually resolved to a fusion block

._._._._._._._._._._!_._



First here is a link to Peter R's paper without all the marketing clutter generated by Scribd http://www.bitcoinunlimited.info/downloads/feemarket.pdf (http://www.bitcoinunlimited.info/downloads/feemarket.pdf).

My take is that the solution to this issue is to mitigate it by creating a proper fee market in Monero. This is something I am actually very interested in working on. In such a scenario the transactions in the memory pool would be ordered by fee per KB, and a rational miner would prioritize transactions in order of return, while the sender would pay a fee depending on the priority desired for the transaction. The existing penalty function for oversized blocks in Monero provides a start, but it and the fee structure will likely have to be optimized to minimize the cost to legitimate users while maximizing the cost of spam and attacks.

I don't know if we're talking about the same issue. If I'm not mistaken, the bolded above is actually quite possible and could already be implemented with the existing code. My question revolves around this: in a world where Monero is the blockchain and has very high activity, the large blocks that will be created will probably cause centralization onto infrastructure that has high bandwidth. This is simply because those nodes connected to each other on wide pipes will be able to sling 200 MB blocks to each other. The first miner to get a new 200 MB block has the best chance of making the next block. That's Satoshi consensus, right?

To keep it decentralized, we need to find a way to keep the narrow-pipe miners in the game. The fusion block concept above may do that. Actually, the fusion block with the light block concept would really do the trick.

So ultimately, yes, the fee market will exist. But even with a fee market we still run into the centralization problem. Thus, instead of slinging 200 MB blocks around, a block solver can send the block index in advance (I've described this concept in the lightblock thing), which is significantly smaller, and a miner having received this block index can immediately create an alternative block that doesn't include those transactions, and either attempt to create a fusion block or the next block.
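A minimal sketch of that "block index" relay idea: send only transaction ids and let peers rebuild the block from their own mempool, requesting just what they're missing. (The `txid` function here is a single SHA-256 stand-in, not Monero's real transaction hashing, and the block header is omitted entirely.)

```python
import hashlib

def txid(tx_bytes: bytes) -> str:
    # stand-in for the real transaction hash
    return hashlib.sha256(tx_bytes).hexdigest()

def make_block_index(block_txs):
    """Instead of relaying the full block, relay only the ids of the
    transactions it contains."""
    return [txid(tx) for tx in block_txs]

def reconstruct(index, mempool):
    """A peer rebuilds the block from its own mempool; any ids it
    doesn't have must be requested from the sender."""
    have = {txid(tx): tx for tx in mempool}
    missing = [h for h in index if h not in have]
    block = [have[h] for h in index if h in have]
    return block, missing
```

Since nodes already share most of the mempool, the index is a small fraction of the full block, which is what keeps narrow-pipe miners in the game.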

And finally, is there any reason why a miner can't be working on different blocks simultaneously?


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on January 23, 2016, 09:29:38 AM
And finally, is there any reason why a miner can't be working on different blocks simultaneously?

Yes, the nature of proof of work is such that you can only work on one block at a time. Of course you can multitask, but the hash rate would just be lower on each one, so I'm not sure why you would want to do that.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on January 23, 2016, 02:10:11 PM
And finally, is there any reason why a miner can't be working on different blocks simultaneously?

Yes, the nature of proof of work is such that you can only work on one block at a time. Of course you can multitask, but the hash rate would just be lower on each one, so I'm not sure why you would want to do that.


Okay. So it would be a completely different approach / context than merged mining? I was thinking merged mining might be possible with self.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on January 23, 2016, 08:57:04 PM
And finally, is there any reason why a miner can't be working on different blocks simultaneously?

Yes, the nature of proof of work is such that you can only work on one block at a time. Of course you can multitask, but the hash rate would just be lower on each one, so I'm not sure why you would want to do that.


Okay. So it would be a completely different approach / context than merged mining? I was thinking merged mining might be possible with self.

I suppose you could possibly do that but I don't understand the purpose. Merged mining doesn't really add any security unless there are added rewards to mining (which would then increase the hash rate).



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on January 23, 2016, 10:36:01 PM
And finally, is there any reason why a miner can't be working on different blocks simultaneously?

Yes, the nature of proof of work is such that you can only work on one block at a time. Of course you can multitask, but the hash rate would just be lower on each one, so I'm not sure why you would want to do that.


Okay. So it would be a completely different approach / context than merged mining? I was thinking merged mining might be possible with self.

I suppose you could possibly do that but I don't understand the purpose. Merged mining doesn't really add any security unless there are added rewards to mining (which would then increase the hash rate).



The goal of fusion blocks (and any offshoot) isn't to explicitly increase security, it is a countermeasure for orphanization, which I think is the primary reason large blocks are feared to cause centralization. You fix the orphanization problem, you fix the largeblock decentralization problem.

As detailed above, if there are 10 transactions in the pool, you could (with the right software and protocol mods) simultaneously make 2 blocks: one block has transactions 1 through 5, the second has transactions 6 through 10. If you find a solution for 6-10, you broadcast the block, add it to the chain, etc. However, it turns out you're on a high-lag part of the network, so while you found your 6-10 block solution, someone else found a solution for 1-5. What I'm calling for is a protocol mod that would allow the daemon to recognize that two blocks are competing for the same block height but are both valid, so they should be fused.

Merged mining of self is just a piece of this.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on January 23, 2016, 10:48:06 PM
And finally, is there any reason why a miner can't be working on different blocks simultaneously?

Yes, the nature of proof of work is such that you can only work on one block at a time. Of course you can multitask, but the hash rate would just be lower on each one, so I'm not sure why you would want to do that.


Okay. So it would be a completely different approach / context than merged mining? I was thinking merged mining might be possible with self.

I suppose you could possibly do that but I don't understand the purpose. Merged mining doesn't really add any security unless there are added rewards to mining (which would then increase the hash rate).



The goal of fusion blocks (and any offshoot) isn't to explicitly increase security, it is a countermeasure for orphanization, which I think is the primary reason large blocks are feared to cause centralization. You fix the orphanization problem, you fix the largeblock decentralization problem.

As detailed above, if there are 10 transactions in the pool, you could (with the right software and protocol mods) simultaneously make 2 blocks: one block has transactions 1 through 5, the second has transactions 6 through 10. If you find a solution for 6-10, you broadcast the block, add it to the chain, etc. However, it turns out you're on a high-lag part of the network, so while you found your 6-10 block solution, someone else found a solution for 1-5. What I'm calling for is a protocol mod that would allow the daemon to recognize that two blocks are competing for the same block height but are both valid, so they should be fused.

It is also the case that if one block includes transactions 1,2,3,4, and 5, while another block includes 3,4,5,6, and 7, they can still be merged as long as transactions 3,4, and 5 are identical. So with this observation you can just pick any subset of transactions you want.

There is an issue that such blocks can be incompatible if they include incompatible (double spend) transactions. Then you get into what sort of rules to use to resolve the conflict and what sorts of attacks that allows.
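The two conditions above (shared transactions must be identical, and no conflicting double spends) can be sketched as a merge check. This is an illustrative model that reduces each transaction to the set of key images it spends; real validation involves far more:

```python
def can_merge(block_a, block_b):
    """block_*: dict mapping txid -> set of spent key images.
    Blocks can merge only if shared txids have identical contents and
    no key image is spent by two different transactions (a double
    spend across the two blocks)."""
    # shared transaction ids must refer to identical transactions
    for tid in block_a.keys() & block_b.keys():
        if block_a[tid] != block_b[tid]:
            return False
    # no key image may be spent by two distinct transactions
    spent = {}
    for blk in (block_a, block_b):
        for tid, images in blk.items():
            for img in images:
                if spent.setdefault(img, tid) != tid:
                    return False
    return True
```

So blocks {1,2,3,4,5} and {3,4,5,6,7} merge cleanly when 3, 4, and 5 are byte-identical, but any pair of distinct transactions spending the same key image makes the sisters incompatible, and that's where the conflict-resolution rules (and the attacks on them) come in.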

There are various people working on ideas like that.

See "Breaking the chains of blockchain protocols" and "Braiding the chain" here: https://scalingbitcoin.org/hongkong2015/#presentations


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on January 23, 2016, 11:35:52 PM
And finally, is there any reason why a miner can't be working on different blocks simultaneously?

Yes, the nature of proof of work is such that you can only work on one block at a time. Of course you can multitask, but the hash rate would just be lower on each one, so I'm not sure why you would want to do that.


Okay. So it would be a completely different approach / context than merged mining? I was thinking merged mining might be possible with self.

I suppose you could possibly do that but I don't understand the purpose. Merged mining doesn't really add any security unless there are added rewards to mining (which would then increase the hash rate).



The goal of fusion blocks (and any offshoot) isn't to explicitly increase security, it is a countermeasure for orphanization, which I think is the primary reason large blocks are feared to cause centralization. You fix the orphanization problem, you fix the largeblock decentralization problem.

As detailed above, if there are 10 transactions in the pool, you could (with the right software and protocol mods) simultaneously make 2 blocks: one block has transactions 1 through 5, the second has transactions 6 through 10. If you find a solution for 6-10, you broadcast the block, add it to the chain, etc. However, it turns out you're on a high-lag part of the network, so while you found your 6-10 block solution, someone else found a solution for 1-5. What I'm calling for is a protocol mod that would allow the daemon to recognize that two blocks are competing for the same block height but are both valid, so they should be fused.

It is also the case that if one block includes transactions 1,2,3,4, and 5, while another block includes 3,4,5,6, and 7, they can still be merged as long as transactions 3,4, and 5 are identical. So with this observation you can just pick any subset of transactions you want.

There is an issue that such blocks can be incompatible if they include incompatible (double spend) transactions. Then you get into what sort of rules to use to resolve the conflict and what sorts of attacks that allows.

There are various people working on ideas like that.

See "Breaking the chains of blockchain protocols" and "Braiding the chain" here: https://scalingbitcoin.org/hongkong2015/#presentations

marvelous, thanks for the leads.

I think in general, for the bolded, it would be kept as simple as possible. That braid paper seems to have really taken this to an extreme. Ultimately, I think orphanization should still be possible; it should just be less likely to occur. Braiding, or fusion, or whatever it's called, should only be allowed to happen at fresh block heights, not deep. Cohorts/siblings? Too much. Or that's an eventuality that can be figured out later.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on January 24, 2016, 12:01:37 AM
And finally, is there any reason why a miner can't be working on different blocks simultaneously?

Yes, the nature of proof of work is such that you can only work on one block at a time. Of course you can multitask, but the hash rate would just be lower on each one, so I'm not sure why you would want to do that.


Okay. So it would be a completely different approach / context than merged mining? I was thinking merged mining might be possible with self.

I suppose you could possibly do that but I don't understand the purpose. Merged mining doesn't really add any security unless there are added rewards to mining (which would then increase the hash rate).



The goal of fusion blocks (and any offshoot) isn't to explicitly increase security, it is a countermeasure for orphanization, which I think is the primary reason large blocks are feared to cause centralization. You fix the orphanization problem, you fix the largeblock decentralization problem.

As detailed above, if there are 10 transactions in the pool, you could (with the right software and protocol mods) simultaneously make 2 blocks - one block has transactions 1 through 5, the second block has transactions 6 through 10. If you find a solution for 6-10, you broadcast the block, add it to the chain, etc. However, it turns out you're on a high-lag part of the network, so while you found your 6-10 block solution, someone else found a solution for 1-5. What I'm calling for is a protocol mod that would allow the daemon to recognize that two blocks are competing for the same blockheight but both are valid, so they should be fused.
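To make the idea concrete, here is a toy sketch (hypothetical names, not Monero code) of the proposed fusion rule: two same-height blocks are merged by taking the union of their transactions, and fusion is refused when any spend key is used by two different transactions.

```python
def fuse_blocks(block_a, block_b):
    """Toy sketch of the proposed fusion rule: merge two competing blocks
    at the same height if their transactions don't conflict.
    Each block is a list of (tx_id, spent_keys) tuples."""
    spent = {}
    for tx_id, spent_keys in block_a + block_b:
        for k in spent_keys:
            # A key spent by two different transactions is a double spend
            if spent.get(k, tx_id) != tx_id:
                return None  # incompatible blocks: fall back to orphaning one
            spent[k] = tx_id
    # Union of transactions; identical tx ids collapse naturally
    merged = {tx_id for tx_id, _ in block_a + block_b}
    return sorted(merged)
```

In Monero the double-spend check would be on key images; here they are just strings for illustration.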

It is also the case that if one block includes transactions 1,2,3,4, and 5, while another block includes 3,4,5,6, and 7, they can still be merged as long as transactions 3,4, and 5 are identical. So with this observation you can just pick any subset of transactions you want.

There is an issue that such blocks can be incompatible if they include incompatible (double spend) transactions. Then you get into what sort of rules to use to resolve the conflict and what sorts of attacks that allows.

There are various people working on ideas like that.

See "Breaking the chains of blockchain protocols" and "Braiding the chain" here: https://scalingbitcoin.org/hongkong2015/#presentations

marvelous, thanks for the leads.

I think in general, for the bolded, it would be kept as simple as possible. That braid paper seems to have really taken this to an extreme. Ultimately, I think orphanization should still be possible, it should just be less likely to occur. Braiding, or fusion, or whatever it's called, should only be allowed to happen at fresh blockheights, not deep ones. Cohorts / siblings? Too much. Or that's an eventuality that can be figured out later.

I think you will find that the added complexity is needed to prevent various attacks where miners can get extra credit beyond what they deserve based on work done, or get more control of the chain with less hash power.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on January 24, 2016, 01:35:02 AM
And finally, is there any reason why a miner can't be working on different blocks simultaneously?

Yes, the nature of proof of work is such that you can only work on one block at a time. Of course you can multitask, but the hash rate would just be lower on each one, so I'm not sure why you would want to do that.


Okay. So it would be a completely different approach / context than merged mining? I was thinking merged mining might be possible with itself.

I suppose you could possibly do that but I don't understand the purpose. Merged mining doesn't really add any security unless there are added rewards to mining (which would then increase the hash rate).



The goal of fusion blocks (and any offshoot) isn't to explicitly increase security, it is a countermeasure for orphanization, which I think is the primary reason large blocks are feared to cause centralization. You fix the orphanization problem, you fix the largeblock decentralization problem.

As detailed above, if there are 10 transactions in the pool, you could (with the right software and protocol mods) simultaneously make 2 blocks - one block has transactions 1 through 5, the second block has transactions 6 through 10. If you find a solution for 6-10, you broadcast the block, add it to the chain, etc. However, it turns out you're on a high-lag part of the network, so while you found your 6-10 block solution, someone else found a solution for 1-5. What I'm calling for is a protocol mod that would allow the daemon to recognize that two blocks are competing for the same blockheight but both are valid, so they should be fused.

It is also the case that if one block includes transactions 1,2,3,4, and 5, while another block includes 3,4,5,6, and 7, they can still be merged as long as transactions 3,4, and 5 are identical. So with this observation you can just pick any subset of transactions you want.

There is an issue that such blocks can be incompatible if they include incompatible (double spend) transactions. Then you get into what sort of rules to use to resolve the conflict and what sorts of attacks that allows.

There are various people working on ideas like that.

See "Breaking the chains of blockchain protocols" and "Braiding the chain" here: https://scalingbitcoin.org/hongkong2015/#presentations

marvelous, thanks for the leads.

I think in general, for the bolded, it would be kept as simple as possible. That braid paper seems to have really taken this to an extreme. Ultimately, I think orphanization should still be possible, it should just be less likely to occur. Braiding, or fusion, or whatever it's called, should only be allowed to happen at fresh blockheights, not deep ones. Cohorts / siblings? Too much. Or that's an eventuality that can be figured out later.

I think you will find that the added complexity is needed to prevent various attacks where miners can get extra credit beyond what they deserve based on work done, or get more control of the chain with less hash power.


Very likely. I doubt in my random musings I'm going to stumble into obvious answers. In general, though, it's good to see people working on this in bitcoin. It should be easy to integrate into other cryptocurrencies, especially those where network latency will really cause problems (e.g., adaptive blocksize like ours).

Is the bolded related to an instance where A is transaction 1,2,3,4,5 and A' is 5,6?

And blocks don't need to have multiple parents to get rid of orphans. What this whole thing depends on is the frequency of orphanization, and I don't know what it is, but I doubt it's 100%. These two papers seek to replace blocks with something else. I think something can be made that coexists with blocks. I.e., you have A and A'. Both are valid. So you then add a meta-block (hashing those two blocks) that describes the details: "blocks A and A' are both ok. Treat the union set as single transactions, treat those outside the union set as single transactions". All of this might actually be done at the re-org level.

I guess what you might be getting at is the problem of "well if A' is found, and then that propagates, and then A'+1 is found before A and A' are sewn together into a fusion block, then what?" I think the simple answer is that A'+1 then gets orphaned.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on February 23, 2016, 02:41:50 AM
There are ways around this, such as out-of-band signaling, or recipient-provided keys. There are different trade-offs, though, and I haven't yet found the "perfect" one.

I figured you could just scan the blockchain without saving it. So basically find a way to get the daemon to synchronize with the network without saving the blockchain, and get simplewallet to scan those downloaded blocks as the daemon synchronizes.

Of course you can do that but (with some recognition of MKN's laws) does it really make sense to expect someone on a cell phone to receive and scan every transaction being generated by millions or billions of people? Probably not.

My personal favorite for in-person transactions is that the sender signs the transaction, transmits it locally to the recipient via QR/NFC/BT/etc., and the recipient transmits it to the blockchain. In doing so the recipient now knows which transactions require confirmation and scanning. The rest can be ignored. As luigi said though, there are multiple approaches that can work for different use cases.



In the original post, he didn't explicitly mention the user would be on a phone. I'm imagining a scenario where the blockchain is 600 gigs... you surely don't need to keep all of it on your home PC. I think this has all been hashed out here: ... well, I can't find it on the forum.getmonero.org site. Basically some thread discussed different node setups - archival nodes (whole blockchain) vs. light nodes (part of the blockchain).

The in-person thing is a different boat with similar problems.



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on February 23, 2016, 03:02:19 AM
There are ways around this, such as out-of-band signaling, or recipient-provided keys. There are different trade-offs, though, and I haven't yet found the "perfect" one.

I figured you could just scan the blockchain without saving it. So basically find a way to get the daemon to synchronize with the network without saving the blockchain, and get simplewallet to scan those downloaded blocks as the daemon synchronizes.

Of course you can do that but (with some recognition of MKN's laws) does it really make sense to expect someone on a cell phone to receive and scan every transaction being generated by millions or billions of people? Probably not.

My personal favorite for in-person transactions is that the sender signs the transaction, transmits it locally to the recipient via QR/NFC/BT/etc., and the recipient transmits it to the blockchain. In doing so the recipient now knows which transactions require confirmation and scanning. The rest can be ignored. As luigi said though, there are multiple approaches that can work for different use cases.

In the original post, he didn't explicitly mention the user would be on a phone. I'm imagining a scenario where the blockchain is 600 gigs... you surely don't need to keep all of it on your home PC. I think this has all been hashed out here: ... well, I can't find it on the forum.getmonero.org site. Basically some thread discussed different node setups - archival nodes (whole blockchain) vs. light nodes (part of the blockchain).

The in-person thing is a different boat with similar problems.

Of course you don't have to keep the whole blockchain. Pruning is fine, but it only addresses the storage component and doesn't help with the bandwidth and scanning (or more generally "processing") issues at all.



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on February 25, 2016, 07:29:17 AM
I didn't intend to post in this thread again, but it seems I remember Monero will soon add multi-sig, and I wanted to make you aware of a potential 51% attack hole enabled by multi-sig:

https://bitcointalk.org/index.php?topic=1364951.msg14002317#msg14002317


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on February 27, 2016, 05:09:43 AM
I didn't intend to post in this thread again, but it seems I remember Monero will soon add multi-sig, and I wanted to make you aware of a potential 51% attack hole enabled by multi-sig:

https://bitcointalk.org/index.php?topic=1364951.msg14002317#msg14002317

Thanks


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on March 29, 2016, 08:04:35 PM
I was doing some research on long-term storage solutions for work and came across the notion that the factor that limits an SSD's life is the number of writes performed. Reading from a cell doesn't really kill the cell; it's the need to write to a cell that will eventually kill it.

So, my question is whether LMDB currently (or can be modified to) only write to the database once for a given entry.

My gut tells me that - yeah - because the blockchain is just a database, then once your node is sync'd up, everything is written. However, the inner workings of LMDB are unknown to me - so perhaps it has to rewrite certain locations?


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: hyc on April 05, 2016, 10:44:16 AM
I was doing some research on long-term storage solutions for work and came across the notion that the factor that limits an SSD's life is the number of writes performed. Reading from a cell doesn't really kill the cell; it's the need to write to a cell that will eventually kill it.

So, my question is whether LMDB currently (or can be modified to) only write to the database once for a given entry.

My gut tells me that - yeah - because the blockchain is just a database, then once your node is sync'd up, everything is written. However, the inner workings of LMDB are unknown to me - so perhaps it has to rewrite certain locations?

Reading from a cell can wear it out too, it just takes longer to happen.

LMDB is a copy-on-write design, so it almost never overwrites existing pages. It is actually proven to be more SSD/flash-friendly than most other DB designs in existence.
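As a toy illustration of the copy-on-write idea (this is a sketch of the principle, not LMDB's actual implementation): updates always go to fresh pages and the root pointer moves, so existing pages are never overwritten in place.

```python
class CowStore:
    """Toy copy-on-write store: each put appends a new 'page' rather
    than overwriting the old one; the root maps keys to the latest
    page. Old pages would be recycled later, never rewritten in place."""
    def __init__(self):
        self.pages = []   # append-only page area
        self.root = {}    # key -> index of the page holding its latest value

    def put(self, key, value):
        self.pages.append((key, value))   # write a fresh page, never overwrite
        self.root[key] = len(self.pages) - 1

    def get(self, key):
        return self.pages[self.root[key]][1]
```

Updating a key twice leaves both versions in the page area, which is why such designs avoid the in-place random rewrites that wear flash fastest.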

Here's some relevant work on a flash-optimized database, compared against LevelDB - LevelDB is over 70x worse in terms of write amplification and data wearout. https://www.usenix.org/conference/atc15/technical-session/presentation/marmol

(LMDB is not compared in this work, but we have other tests that show LMDB's write amplification is far smaller than LevelDB's http://symas.com/mdb/#bench )

Relax. There is nothing going on in the database world that LMDB hasn't already solved.

I've been developing systems on SSDs for over a decade, I encountered and solved these wearout problems long before any other DB authors even knew they existed. LMDB was designed for solid state storage.

http://forums.storagereview.com/index.php/topic/22805-does-mechanical-storage-have-a-future/#entry230060
http://forums.storagereview.com/index.php/topic/24749-160-gb-flash-ssd-anounced/#entry240229


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on April 05, 2016, 10:59:39 AM
Random writes can wear out the SSD much more rapidly because they cause entire sectors to be cleared and rewritten (potentially even moved) even if only one bit in the sector is changed. Sequential writes are much healthier for the SSD.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: hyc on April 05, 2016, 11:18:47 AM
Random writes can wear out the SSD much more rapidly because they cause entire sectors to be cleared and rewritten (potentially even moved) even if only one bit in the sector is changed. Sequential writes are much healthier for the SSD.

Yes, but that's only part of the story. Flash sectors are 512 bytes each; LMDB writes in pages (4KB on common systems). So right off the top, the wear-out factor is reduced by a factor of 8. Also, LMDB is copy-on-write, so rewriting of individual pages is a rare occurrence; rewriting of single random sectors is a non-occurrence.

The one factor we can't control for is filesystem and partition alignment - if your disk layout doesn't line up with a multiple of 2MB, you're probably fragmenting all your accesses. This is the essential part of using SSDs - flash memory can be read and written in sectors, but erases can only be done in erase blocks that are commonly 2 or 4 MB in size.

The insane thing is that for years, flash drive vendors shipped their devices with a default disk geometry of 255 heads, 63 sectors/track, which gives you cylinders of 16065 sectors. Even though disk devices generally use LBA instead of CHS addressing, disk partitioning tools and filesystems still use cylinders. I always reformat to 256 heads, 32 sectors/track, which gives 8192-sector cylinders: 4MB. A misaligned partition can cause major performance degradation and accelerated wearout.
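The arithmetic in that last paragraph is easy to check; a small sketch using the numbers above:

```python
SECTOR = 512  # bytes per flash sector, as in the post above

def cylinder_bytes(heads, sectors_per_track):
    """Size of one CHS 'cylinder' in bytes."""
    return heads * sectors_per_track * SECTOR

def aligned(start_sector, erase_block=4 * 1024 * 1024):
    """True if a partition starting at this LBA sector lines up
    with a flash erase-block boundary (4 MB assumed here)."""
    return (start_sector * SECTOR) % erase_block == 0

# Legacy geometry: 255 heads x 63 sectors -> 16065-sector cylinders
legacy = cylinder_bytes(255, 63)   # 8,225,280 bytes: not a power of two
fixed = cylinder_bytes(256, 32)    # 4,194,304 bytes: exactly 4 MB
```

A partition that starts on a legacy cylinder boundary (sector 16065) straddles erase blocks, while one starting at sector 8192 lines up exactly.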


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on April 10, 2016, 06:24:47 PM
xpost: https://forum.getmonero.org/4/academic-and-technical/2525/enhancing-the-auditability-of-monero-transactions

Over the past couple of weeks, I've seen at least two instances where it's become apparent that the auditability of Monero transactions is inadequate, and this is a common critique of Monero for those who dive past the limitations of the viewkey. It's incredibly ironic that auditability is an issue, but it does make sense. There needs to be a trustless way to share your financial information if you choose to do so.

To date, there are two mechanisms: the view key, which has the fault of only showing incoming transactions, and the databasing by simplewallet of transaction history, which has the fault of relying on trust. "I trust that you have logged all of the transactions associated with this account."

I don't have a solution, but I'm hoping that we can collectively start thinking about one. Like I've said before, I don't understand the cryptography well enough to understand why it's difficult to view outgoing transactions without the ability to sign them.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on April 10, 2016, 06:52:00 PM
I don't have a solution, but I'm hoping that we can collectively start thinking about one. Like I've said before, I don't understand the cryptography well enough to understand why it's difficult to view outgoing transactions without the ability to sign them.

Viewing the outgoing transactions would break the rings for others, not just your own, by reducing the anonymity sets - for example, if the entity you provided the viewkey to was a very popular entity to provide viewkeys to.

Other than that, I think it would be technically possible to make a viewkey for outgoing transactions by having two private keys and then proving in zero knowledge that the two ring signatures were equivalent, without giving up your private key which has power to spend your outputs.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on April 10, 2016, 08:58:30 PM
Over the past couple of weeks, I've seen at least two instances where its become apparent the auditability of Monero transactions is inadequate

It isn't entirely clear to me whether it is or not. Is auditability of cash or gold inadequate?

The auditability comes from surrounding processes and not from the asset itself.

Maybe it is a mistake to try to push auditability too far into the system itself. The current receive view keys might be a reasonable middle ground, because with one it can be proved that you received some funds. It is then up to you to show what you did with them, and if you can't, you can be held responsible (for embezzlement, etc.).


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on April 11, 2016, 04:39:32 AM
Over the past couple of weeks, I've seen at least two instances where its become apparent the auditability of Monero transactions is inadequate

It isn't entirely clear to me whether it is or not. Is auditability of cash or gold inadequate?

The auditability comes from surrounding processes and not from the asset itself.

Maybe it is a mistake to try to push auditability too far into the system itself. The current receive view keys might be a reasonable middle ground, because with one it can be proved that you received some funds. It is then up to you to show what you did with them, and if you can't, you can be held responsible (for embezzlement, etc.).

Valid points - separating the asset from the rest of the mechanisms. I would argue, then, as have others, that we need to come up with a better way to describe the actual auditability capabilities of the extant Monero network. I was under the impression, until I learned better, that Monero was private / opaque by default, but that you could turn it into a bitcoin-style thing if you gave someone the ability to do so. I think the term "viewkey" has this baked into it: "oh, now I can view everything, where before I couldn't, because cryptography".

I just don't like the bolded part because it's more work, and will probably be the number 1 reason future monerians would use a centralized service - to keep them compliant by keeping all their "receipts" in order. I guess ultimately it's a matter of keeping your wallet.bin safe and noncorrupted once this becomes a stable thing. But I've gone through at least 5 wallet.bins for every one of my accounts by now.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on April 11, 2016, 05:43:36 AM
I guess ultimately it's a matter of keeping your wallet.bin safe and noncorrupted once this becomes a stable thing. But I've gone through at least 5 wallet.bins for every one of my accounts by now.

Have you seen the corruption recently? I think the wallet saving was changed to not overwrite in place within the past several months, which should eliminate most of the corruption. In time it will be replaced with a database.




Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on April 11, 2016, 11:36:19 AM
As I was dozing off last night I remembered a possible point... this puts a level of trust back into the system, which seems anathema to a trustless currency system. Unless we come up with a way to decentralize the auditing mechanism?

i dunno ... we share hashes of our wallets ?

fresh morning thoughts. possibly useless.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on April 19, 2016, 03:00:29 AM
Ye olde selection-of-outputs problem. We'll call this the Monero problem. Maybe it will go down in the books like the Byzantine generals problem or whatever.

I have a list of things. Some of the things in this list are mine - I own them. I want to use some of my things in this list without an observer knowing which of the things I'm using are actually mine. Therefore, I select my thing and some others as decoys, so that the observer doesn't know which of the things is mine. However, the nature of the things is such that once they are used by the true owner, they cannot be used again. Thus, older things in this list have a higher probability of having already been used and of only being used as decoys, though there is no way to determine whether a thing has been used.

Thus, how do I select my set of decoys to have the highest probability of appearing to be unused?

The triangular distribution (I think this is what is currently used in Monero), as demonstrated by mWo12 here: http://pastebin.com/raw/4TzcF9b9, seems to produce the pattern of:

1. recent outputs - highly likely not used: seen as the columnar pattern on the far right.
2. then, as far as I can tell, completely random selection throughout the blockchain. Unknown what the probability of usage actually is.

Probability of usage - this is probably something that we could define somehow. Out-of-band information can tell us that the probability of usage increases with more users of the blockchain, but how this manifests in the blockchain is a different beast.
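For illustration, a hedged sketch (not Monero's actual selection code) of triangular-style decoy selection, where draws are biased toward the newest output indices:

```python
import random

def pick_decoys(real_index, num_outputs, ring_size, seed=None):
    """Hypothetical sketch of triangular-distribution decoy selection:
    draw output indices with the mode at the newest output, so recent
    outputs (less likely already spent) are picked more often."""
    rng = random.Random(seed)
    decoys = set()
    while len(decoys) < ring_size - 1:
        # mode near num_outputs - 1 skews draws toward recent outputs
        idx = int(rng.triangular(0, num_outputs, num_outputs - 1))
        if idx != real_index and idx < num_outputs:
            decoys.add(idx)
    # the ring is the real output hidden among the decoys
    return sorted(decoys | {real_index})
```

This reproduces the pattern in the pastebin plot: a dense column of recent indices plus a scattering across the whole chain.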



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: john-connor on April 19, 2016, 08:47:39 AM
How does Monero propose to resolve the fact that its blockchain is growing faster than Moore's Law and pruning is so limited? Are there any proposals for a solution to the new faulty database that can easily fail an SSD? So far I've failed two during initial download at the 50% mark, and the disk writes were well into the hundreds of gigabytes (what is this database doing?).


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: TPTB_need_war on April 19, 2016, 11:06:51 AM
How does Monero propose to resolve the fact that its blockchain is growing faster than Moore's Law and pruning is so limited?

Human populations don't grow faster than Moore's law. Duh.

Disk arrays scale to anything we can fathom.

The issue is that no blockchain consensus can maintain decentralization of validation, not because of scaling problems but because of the fundamental economic reality that not every miner can have an equal share of the hashrate, thus verification costs are not shared equally. This creates an asymmetry where economies of scale will maximize profit and grow hashrate the fastest, thus centralizing mining.

The solution requires some clever innovation on proof-of-work.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: hyc on April 20, 2016, 01:51:23 AM
How does Monero propose to resolve the fact that its blockchain is growing faster than Moore's Law and pruning is so limited? Are there any proposals for a solution to the new faulty database that can easily fail an SSD? So far I've failed two during initial download at the 50% mark, and the disk writes were well into the hundreds of gigabytes (what is this database doing?).

To be blunt, you're full of s#it.

https://gist.github.com/hyc/33f3eec6bae83246209d


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: generalizethis on April 20, 2016, 05:44:38 AM
How does Monero propose to resolve the fact that its blockchain is growing faster than Moore's Law and pruning is so limited?

Human populations don't grow faster than Moore's law. Duh.

Disk arrays scale to anything we can fathom.

The issue is that no blockchain consensus can maintain decentralization of validation, not because of scaling problems but because of the fundamental economic reality that not every miner can have an equal share of the hashrate, thus verification costs are not shared equally. This creates an asymmetry where economies of scale will maximize profit and grow hashrate the fastest, thus centralizing mining.

The solution requires some clever innovation on proof-of-work.

Not necessarily, algorithms could be programmed to move your funds between coins if an attack threshold is passed--of course this requires trustless exchanges to fill in the gap left by the inability to decentralize mining, and also produces lemming effects if the coin's mining doesn't adjust responsively, though this assumes the attacker is just greedy and not actually trying to destroy the coin. In a full-on attack of crypto, a war to end the battles, the algorithm approach forces the attack to be multi-pronged, but would not prevent coins from attacking each other to gain market share, though my guess is miners would make algorithm adjustments and speed becomes an asset as much as hashing power--I'm rambling possibilities; my guess is that there are some noob assumptions to flesh out, if not totally dismiss.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on April 20, 2016, 09:29:29 PM
This brings us back to the Cryptonote adaptive blocksize limit combined with a tail emission found in Monero where:
1) The cost of mining a block is set by the block subsidy

Correct, meaning the amount of hashrate miners spend will be equal to the block subsidy[1] (where block subsidy will ultimately be Monero's perpetual tail reward, which is necessarily a fixed # of coins (https://bitcointalk.org/index.php?topic=753252.msg14578382#msg14578382)), because (as I pointed out in our prior discussion (https://bitcointalk.org/index.php?topic=1183043.msg13842824#msg13842824)) transaction fees will trend to costs, since the median block size MN will trend upwards to match market demand and thus there is no pricing power on transaction fees.

[1] Note this means the tail reward security of Monero will be very weak and insufficient.

2) The total amount in fees per block has to rise to a number comparable to, but most of the time smaller than, the block subsidy.

You wrote that before in our prior discussion:

The reason the above two scenarios do not apply to a Cryptonote coin with a tail emission such as Monero becomes apparent when one considers the economics of the total block reward components of fees and base reward (new coin emission). If the total in fees per block significantly exceeds the base reward then it becomes economically attractive for miners to burn coins to the penalty by mining larger blocks. The block size rises until the total fees per block fall below a level where it is uneconomic for the miners to pay the penalty by increasing the blocksize. This level is comparable to the base reward. It is at this point where the need for a tail emission becomes clear, since without the tail emission the total block reward (fees plus base reward) would go to zero.

And it still doesn't make any sense to me. The block size will trend upwards to match transaction demand, because the penalty is driven to 0 as the median block size increases: miners can justify burning some of the transaction fees to the penalty, which drives the median block size upwards, which drives the penalty to 0 again. The median block size doesn't have any incentive to decrease again, so transaction fees then fall to costs.
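For reference, the penalty under discussion is the Cryptonote block-size penalty, which (as commonly described) burns part of the base reward quadratically once a block exceeds the median of recent block sizes:

```python
def block_reward(base_reward, block_size, median_size):
    """Cryptonote-style block size penalty (as commonly described):
    mining a block larger than the median burns part of the base reward,
    quadratically in the excess, up to 2x the median."""
    if block_size <= median_size:
        return base_reward            # no penalty at or below the median
    if block_size > 2 * median_size:
        return 0.0                    # oversized block: rejected in practice
    excess = block_size / median_size - 1
    return base_reward * (1 - excess ** 2)
```

So a miner only includes extra transactions when their fees exceed the reward burned, which is the mechanism both sides of this exchange are reasoning about.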

Sorry, as I told you before, Monero does not solve the Tragedy of the Commons in Satoshi's design. It does adaptively increase the block size while preventing spam surges.

I doubt John Conner's design has achieved any better because, as I explained in our prior discussion, there is no decentralized solution to that Tragedy of the Commons in the current proof-of-work designs. I have a solution, but it is a very radical change to the proof-of-work design that relies on unprofitable mining by payers.

As far as I can see, Monero has not solved the Tragedy of the Commons in Satoshi's design. I reiterated my rebuttal to ArticMine:

https://bitcointalk.org/index.php?topic=1441959.msg14599446#msg14599446

Clarification:

Security of a coin will be very tied to its transaction rate × average transaction size, i.e. velocity adoption and wealth of the velocity. The problem I have with the fixed size tail reward as compared to the design I am contemplating is that tail reward only captures those metrics indirectly through exchange price appreciation. I am not sure if the two models are equally powerful. I will need to think more deeply about it. My design also has an orthogonal tail reward.

Edit: some aspects of Monero's tail reward and block size adjustment algorithm are analogous to aspects of my design. There are some other things I didn't mention. I will need to really take the time to distil this into a carefully written white paper. So I would caution readers not to form any concrete conclusions (either for or against any design mentioned here) from these vague discussions.

BTW, I would suggest that Tragedy of the Commons is an ineffective analogy for explaining whatever it is you are trying to explain because obviously-intelligent people such as ArticMine don't understand it. It may be that you are entirely correct, but if you want to communicate effectively you need a differently-worded explanation.

Agreed at the appropriate time. I deem it necessary to be vague since I am months (or moar!) away from implementing my design.

The fundamental issue here is that transaction fees should not be seen as a way to secure a Cryptonote coin such as Monero. The security of the coin is based on the base reward. Now that does not mean that transaction fees will tend to zero and stay there. In fact one would expect the total transaction fees per block to reach, for the most part, an equilibrium at some fraction of the block reward.

To understand this we must understand that while transaction fees are needed to overcome the penalty in order to increase the blocksize, there is no rebate on the penalty when the blocksize falls. This means that just normal fluctuation in transaction demand will require a significant fraction of the block reward in transaction fees. The median blocksize adjustment time is less than a day in Monero. In practice one would expect total transaction fees per block to temporarily approach or even slightly exceed the block reward if there is a sharp rise in transaction demand, and conversely it is possible for transaction fees to temporarily fall close to zero if there is a sharp drop in transaction demand.

Since security is only as strong as its weakest point, for this reason alone one cannot count on transaction fees to secure a Cryptonote coin. The implication for Monero is that the tail emission alone becomes the source of POW security. A further important conclusion is that a coin that uses a Cryptonote adaptive blocksize or something similar and does not have a tail emission or equivalent (for example demurrage) big enough to secure the coin will become insecure and fail.

In the case of Monero one must keep in mind that over time one would expect the tail emission to actually reach an equilibrium with lost coins for a given purchasing power of Monero. The assumption here is that lost coins are proportional to the total emission for a given purchasing power and transaction velocity. One advantage that Monero has here is that monitoring Bytecoin could provide a significant early indication of the risk of too low a tail emission, with a two-year or more lead time. Paradoxically, this early warning system works because of the Bytecoin two-year premine / ninjamine. The Bitcoin inflation rate will also fall below that of Monero, so it could likewise provide an advance indication of risk. As has been indicated above, this risk is of course largely mitigated with exchange rate appreciation, which would normally correlate very strongly with the use of the currency.
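ArticMine's equilibrium argument can be sketched numerically: with a fixed tail emission and an assumed constant fractional loss rate (both numbers below are hypothetical, not Monero's actual parameters), circulating supply converges to tail / loss_rate.

```python
def equilibrium_supply(tail_per_year, loss_rate, years=10000):
    """Sketch of the lost-coins equilibrium argument: a fixed tail
    emission balanced against coins lost at an assumed constant
    fractional rate converges to supply = tail / loss_rate."""
    supply = 0.0
    for _ in range(years):
        supply -= loss_rate * supply  # assumed fraction of supply lost per year
        supply += tail_per_year       # fixed tail emission added each year
    return supply
```

For example, a hypothetical 100-coin/year tail with 1% of coins lost per year settles at 10,000 coins in circulation; inflation tends to zero as the emission merely replaces losses.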


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on June 08, 2016, 06:19:46 PM
Thought of this because I came across another bitcoin block propagation network and hope that monero can be designed to avoid any type of centralizing development.

I'm jumping ahead possibly many years, but suppose we encounter scaling issues with transaction propagation. Presumably there are logical means for light block propagation that have been described elsewhere. But what if transactions simply do not propagate quickly enough? Monero transactions are larger than your average cryptocurrency's, and it's unknown whether a ring size of 3 (mixin of 3 in the old parlance) will stay the standard. I.e., what if a ring size of 10 becomes necessary? Or 100?

So my main question is whether the current block hash is made from all of the information in the transactions (inputs, outputs, etc), or if it is made from the hash of the transaction.

If it's made from the hash of the transaction, then we could implement hash-first transaction propagation. So you could imagine the network cache would be split into more layers. There would be a hash pool (just the hashes), then the transaction pool (hashes + the rest of the transaction data), and then however the rest of the cache is divvied up (i.e., if lightblocks are implemented there's another caching layer).

So, first the transaction hash races through the relay network with its measly size of whatever it is. 64 bytes? Every node that gets it can now start mining with that transaction in their block. Presumably, while they are mining on this transaction, the rest of the data will catch up. Hell, it could even be possible to solve and broadcast a block before the rest of the transaction data even arrives.

If it's made from the entire transaction, then we'd first need to modify the block architecture to just be a hash of the hashes.

Hrm... I see it now. The main problem is validation. But then again, if the transaction data associated with a given hash turns out to be crap, then the transaction data isn't stored in the blockchain. So at worst you have empty hash placeholders in the blockchain (which could be pruned); you accept some blockchain bloat (and potentially grease the ability to spam the network) in exchange for faster transaction propagation.

Unless there was some way to determine whether a hash is valid before the data arrives. But then we're defeating the whole purpose of one-way functions (or whatever the term is called).
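The split-pool idea above can be sketched roughly as follows. This is a toy model, not Monero's actual cache design: the `NodeCache` class, the SHA-256 stand-in for the transaction hash, and the `validate` callback are all illustrative assumptions.

```python
# Sketch of hash-first relay: nodes keep a hash pool of announced
# transaction hashes and a tx pool of fully received transactions.
# Mining may begin on a bare hash; if the full data later fails
# validation, the hash is simply evicted (the empty-placeholder case).

import hashlib

def tx_hash(tx_bytes: bytes) -> str:
    return hashlib.sha256(tx_bytes).hexdigest()   # stand-in for Keccak

class NodeCache:
    def __init__(self):
        self.hash_pool = set()    # announced, data not yet seen
        self.tx_pool = {}         # hash -> full transaction bytes

    def on_hash(self, h: str):
        """Fast path: the small hash races ahead of the data."""
        if h not in self.tx_pool:
            self.hash_pool.add(h)

    def on_tx(self, tx_bytes: bytes, validate) -> bool:
        """Slow path: full data catches up and is checked."""
        h = tx_hash(tx_bytes)
        self.hash_pool.discard(h)
        if validate(tx_bytes):
            self.tx_pool[h] = tx_bytes
            return True
        return False              # bogus data: hash was only a placeholder

    def minable(self):
        """Hashes a miner could include before the data arrives."""
        return self.hash_pool | set(self.tx_pool)

node = NodeCache()
h = tx_hash(b"some signed transaction")
node.on_hash(h)                                   # hash arrives first
assert h in node.minable()                        # minable immediately
node.on_tx(b"some signed transaction", validate=lambda b: True)
assert h in node.tx_pool and h not in node.hash_pool
```

The validation problem discussed above shows up as the `validate` callback: nothing about the hash itself tells the node whether the data behind it is good, which is exactly the one-way-function point.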



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: aminorex on July 07, 2016, 06:58:30 PM
Post-quantum stuff:

https://eprint.iacr.org/2015/1092.pdf
https://eprint.iacr.org/2016/659.pdf


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on July 15, 2016, 08:26:40 PM
Okay. First things first - I deleted a lot of posts during the TPTB_need_war threadsplosion that is best summarized here -
https://bitcointalk.org/index.php?topic=1139756.msg13634887#msg13634887

there are remnants of the discussion still in there, because some contain some good tidbits and frankly I just got lazy. Deleting posts one by one is a bitch.

After some pruning of this thread I want to start things up with the auto adjusting transaction fees.

@arcticmine, I remember that you were going to lead this thought train. I'm curious if you've had any insights into things?

During the last dev meeting, you mentioned that the minimum block inclusion fee (what we commonly think of as the transaction fee) would be based on a calculation that uses the block size as the independent variable. However, in your original musings on the topic, you suggested that the difficulty would be used

https://bitcointalk.org/index.php?topic=1139756.msg12199259#msg12199259

and edited for the final post:

There should be a factor for block subsidy.


Yes. In order for the difficulty to be a surrogate for the value of XMR one needs to factor in the actual block reward. So the formula would look as follows:

F = F_0 * D_0 * R_A / (D_A * R_0)

Where

F_0 is a constant comparable to the current per-kB fee
R_0 is a constant comparable to the current block reward
R_A is the average block reward of blocks N-2K to N-K
D_0 is a constant comparable to the current difficulty
D_A is the average difficulty of blocks N-2K to N-K
N is the last block number
K is a constant, for example 1000

One could replace R_A by the average base block reward, netting out fees and penalties; however it would then not track value as closely, particularly if fees were to dominate the actual block reward in the future.
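The formula is easy to sanity-check by computing it directly. The reference constants below (difficulty, reward) are made-up illustrative values, not actual network figures; only the 0.01 XMR/kB fee is from the thread.

```python
# Direct computation of F = F_0 * D_0 * R_A / (D_A * R_0).
# F_0, D_0, R_0 are constants fixed at fork time; R_A and D_A are
# trailing averages over blocks N-2K .. N-K.

def min_fee_per_kb(F0, D0, R0, RA, DA):
    """Fee falls as difficulty rises relative to the block reward,
    i.e. as the implied external value of XMR rises."""
    return F0 * D0 * RA / (DA * R0)

F0 = 0.01          # current per-kB fee (XMR)
D0 = 1_000_000_000 # reference difficulty (assumed)
R0 = 15.0          # reference block reward (assumed)

# If difficulty doubles while the reward halves, the implied value per
# coin has risen roughly 4x, so the minimum fee drops by 4x.
fee = min_fee_per_kb(F0, D0, R0, RA=7.5, DA=2_000_000_000)
print(round(fee, 6))   # 0.0025
```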

The question of the responsiveness of the difficulty to a sudden increase in price is valid to a degree; however it is tempered by the fact that the increase would be temporary, the increase would hit XMR holders at a time when they are experiencing a significant increase in the value of their XMR,  and in the worst case scenario any drop in transaction volume would likely temper the sudden price movement.  Attacks based on manipulating the hash rate I do not see as an issue here simply because any impact on fees is extremely minor compared to the risk of a 51% type attack.

My main concern here is to not create a situation similar to what is currently happening in Bitcoin with the blocksize debate. It is fairly easy to get the necessary consensus to fork an 18 month old coin; it is quite another matter to try to undo the fork 5 years later, particularly when there is no immediate threat. This is the critical lesson we must learn from Bitcoin.

The discussion initiated by that post was good, and I recommend re-reading.

We could also use the data from the recent price spike and the subsequent difficulty adjustment to see how the fee would have played out. I'm using the queenly we here and should be smacked for doing so. Maybe I'll play around with excel in a bit.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on July 16, 2016, 01:18:48 AM
Thanks for reviving this thread and the old discussions from last summer. My thoughts on this have evolved considerably since then, largely due to gaining a further understanding of the adaptive blocksize limit in Monero and how it impacts fees. In the following post I discussed the response of the adaptive blocksize limit to a spam attack, and this is critical in understanding what determines fees in Monero over the long term.

ArticMine PMed me after I wrote that flaming post (https://bitcointalk.org/index.php?topic=753252.msg13569101#msg13569101), and said he would reply after studying my posts. He has not yet replied. Does that mean I am correct and there is no solution for Monero. I think so.

It is fundamental. Afaics, you'd have to completely rewrite Moaneuro. :P

Rewriting Monero is not necessary at all, but some documentation on how the Cryptonote adaptive blocksize limits actually work is needed, especially given the formula in section 6.2.3 of the Cryptonote Whitepaper is wrong. https://cryptonote.org/whitepaper.pdf My response will come in time.

I will start by examining the Cryptonote Penalty Function for oversize blocks. This is critical to understand any form of spam attack against a Cryptonote coin. From the Cryptonote whitepaper I cited above the penalty function is:

Penalty = BaseReward * (BlkSize / M_N - 1)^2

The new reward is:

NewReward = BaseReward - Penalty

Where M_N is the median of the blocksize over the last N blocks
BlkSize is the size of the current block
BaseReward is the reward as per the emission curve or, where applicable, the tail emission
NewReward is the actual reward paid to the miner
The maximum allowed blocksize, BlkSize, is 2 M_N
The penalty is only applied when BlkSize > (1 + B_min) M_N, where 0 < B_min < 1. In the Cryptonote whitepaper B_min = 0.1.
 
The error in the Cryptonote Whitepaper was to set NewReward = Penalty

For simplicity I will define:
BlkSize = (1 + B) M_N
BaseReward = R_base
Penalty (for a given B) = P_B
NewReward (for a given B) = R_B

The penalty for a given B becomes:
P_B = R_base B^2
While the new reward for a given B becomes:
R_B = R_base (1 - B^2)
The first derivative of P_B with respect to B is:
dP_B / dB = 2 R_base B
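The penalty function above translates directly into code. A minimal sketch: the B_min free band is ignored for simplicity, and the 10 XMR reward and 60 kB median are illustrative values only.

```python
# The Cryptonote penalty function with B the relative block size
# increase over the median M_N (0 <= B <= 1). B_min is omitted here.

def penalty(base_reward, blk_size, median):
    """P = R_base * (BlkSize/M_N - 1)^2, applied only above the median;
    blocks larger than 2 * median are invalid."""
    B = blk_size / median - 1
    if B <= 0:
        return 0.0
    if B > 1:
        raise ValueError("block exceeds 2x median: invalid")
    return base_reward * B * B

def new_reward(base_reward, blk_size, median):
    return base_reward - penalty(base_reward, blk_size, median)

Rbase, M = 10.0, 60_000
print(round(penalty(Rbase, 66_000, M), 6))   # B = 0.1 -> 10 * 0.01 = 0.1
print(new_reward(Rbase, 120_000, M))         # B = 1.0 -> reward fully burned: 0.0
```

Note the quadratic shape: a 10% oversize block forfeits only 1% of the reward, while a doubled block forfeits all of it, which is what makes the fee levels discussed later cluster around the penalty boundary.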

In order to attack the coin by bloating the blocksize the attacker needs to cause over 50% of the miners to mine oversize blocks, and for an expedient attack close to 100% of the miners. This attack must be maintained over a sustained period of time and, more importantly, must be maintained in order to keep the blocks oversized, since once the attack stops the blocks will fall back to their normal size. There are essentially two options here:

1) A 51% attack. I am not going to pursue this for obvious reasons.

2) Induce the existing miners to mine oversize blocks. This is actually the more interesting case; however after cost analysis it becomes effectively a rental version of 1 above. Since the rate of change (first derivative) of P_B is proportional to B, the most effective option for the attacker is to run the attack with B = 1. The cost of the attack has R_base as a lower bound but would be higher, and proportional to R_base, because miners will demand a substantial premium over the base reward to mine the spam blocks, due to the increased risk of orphan blocks as the blocksize increases and competition from legitimate users, whose cost per KB for transaction fees needed to compete with the attacker falls as the blocksize increases. The impact on the coin is to stop new coins from being created while the attack is going on. These coins are replaced by the attacker having to buy coins on the open market in order to continue the attack, which further increases the attacker's costs.

It is at this point that we see the critical importance of a tail emission, since if R_base = 0 this attack has zero cost and the tragedy of the commons actually occurs. This is the critical difference between those Cryptonote coins that have a tail emission, and have solved the problem, such as Monero, and those that do not, and will in time become vulnerable, such as Bytecoin.




There are two very distinct scenarios I see for setting fees in Monero.

The first scenario is when the median of the blocksize over the last N blocks, M_N, is below the effective median of the blocksize, M_0 = 60000 bytes. This is the current scenario, and not the scenario I discuss above, since M_N is replaced above by M_0, eliminating the penalty for a blocksize increase. The current M_N is less than 300 bytes.

The second scenario is where M_N > M_0 and the penalty applies. In this scenario per-KB fees are proportional to the base reward divided by the median of the blocksize over the last N blocks, R_base/M_N. The objective here is to stimulate an efficient market between blocksize scaling and fees. One way to achieve this is to set five fee levels corresponding to transaction confirmation priorities: minimal, low, normal, high, very high. The per-KB fees are calculated using the known values of the block reward and the median of the block size. One places minimal and low below the penalty boundary, normal at the penalty boundary, and high and very high above the penalty boundary.

The prior discussion in this thread can in reality only be applied to the first scenario, for the minimal fee and the low fee. The rest of the fees would have to be set as above, but using M_0 rather than M_N.

Edit: One important consideration is that there is room for a 200x increase in transaction volume, and a possible corresponding increase in price, before we trigger the second scenario, so we will have to scale down the fees without triggering the penalty. It is here that my original idea, as edited and modified above, may be a possibility for setting the minimal and low fees.
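One possible concrete reading of the five-tier scheme is sketched below. This is an illustration only: the tier multipliers and the `scale` factor are my assumptions, not numbers from the discussion; what the sketch preserves is the R_base/M_N proportionality and the 60 kB effective-median floor.

```python
# Sketch: per-kB fees proportional to R_base / M_N, with tier
# multipliers placing minimal/low below the penalty boundary, normal
# at it, and high/very high above. Multipliers and scale are assumed.

TIER_MULTIPLIER = {
    "minimal":   0.25,
    "low":       0.5,
    "normal":    1.0,    # at the penalty boundary
    "high":      2.0,
    "very_high": 4.0,
}

def per_kb_fee(tier, base_reward, median_bytes,
               effective_min=60_000, scale=4.0):
    """Fee per kB ~ scale * R_base / M_N, with M_N floored at the
    effective median M_0 (60 kB) used in the first scenario."""
    m = max(median_bytes, effective_min)
    return TIER_MULTIPLIER[tier] * scale * base_reward * 1024 / m

# With a 10 XMR base reward and a tiny current median (first scenario):
for tier in TIER_MULTIPLIER:
    print(tier, round(per_kb_fee(tier, 10.0, 300), 5))
```

Once the median exceeds the 60 kB floor, every tier's fee falls in proportion to 1/M_N, which is the second-scenario behaviour described above.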


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on July 16, 2016, 02:08:51 AM
( snip )
There are two very distinct scenarios I see for setting fees in Monero.

The first scenario is when the  median of the blocksize over the last N blocks, MN < effective median of the blocksize over the last N blocks, M0 = 60000 bytes. This is the current  scenario and not the scenario I discuss above, since MN is replaced above by M0 eliminating the penalty for a blocksize increase. The current MN is less than 300 bytes.

The second scenario is where MN > M0 and the penalty applies. In this scenario per KB fees are proportional to the base reward divided by the median of the blocksize over the last N blocks, Rbase/MN. The objective here is to stimulate an efficient market between blocksize scaling and fees. One way to achieve this is to set 5 fee levels corresponding to transaction confirmation priorities. minimal, low,  normal, high, very high. The per KB fees are calculated using the known values of the block reward, and, the median of the block size. One places  minimal and low below the penalty boundary,  normal at the penalty boundary and high and very high above the penalty boundary.

The prior discussion in this thread can in reality only be possibly applied to the first scenario for the minimal fee and the low fee. The rest of the fees would have to be set as above but using M0 rather than MN.

Edit: One important consideration is that there is a 200x increase in transaction volume and possible corresponding increase in price before we trigger the second scenario, so we will have to scale down the fees with no trigger of the penalty. It is here where my original idea as edited and modified above may be a possibility to set the low minimal and low fees.

I'm confused, though I think I'm putting it together. I see you're threading the needle of making the fee adaptive while giving the fee the ability to serve its intended purpose of preventing blocksize expansion. I think you were on to something before with using the difficulty as an on-chain surrogate of external value. I think the need for that factor will exist at any stage of the chain's life - during the initial distribution curve and during the tail emission. Your second scenario above seems focused on the tail emission portion of the coin's existence, which doesn't happen for another ... however many years.




Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on July 16, 2016, 03:26:23 AM
...
I'm confused, though I think I'm putting it together. I see you're threading the needle of making the fee adaptive while giving the fee the ability to serve its intended purpose of preventing blocksize expansion. I think you were on to something before with using the difficulty as an on-chain surrogate of external value. I think the need for that factor will exist at any stage of the chain's life - during initial distribution curve and during the tail emission. Your second scenario above seems focused on the tail emission portion of the coins existence, which doesn't happen for another ... however many years.




No, the second scenario is determined by the median blocksize becoming larger than 60,000 bytes, not by Monero reaching the tail emission. One can consider Bitcoin here as a special case with an infinite penalty and a fixed blocksize of 300,000 bytes. This was to a large degree the situation with Bitcoin until the spring of 2013.

In either case the top three fee tiers would be determined by the blocksize penalty formula. There is really not much choice here if these fee tiers are going to challenge the blocksize penalty formula. There is no reason, however, why the bottom two fee tiers could not be based on a difficulty ratio, as was my original idea, even after tail emission. The one proviso here would be to cap the lower two tiers at some percentage of the third tier. If the difficulty were to increase over time due to much more efficient hardware, this could cause a spread in the fees that would make Monero transactions very affordable when there is no pressure on the blocksize. Spammers would still be blocked by the third and higher tiers.

Here is some research on Metcalfe's Law which to a large degree does support the case for not relying solely on the blocksize scaling / penalty formula for the lower two fee tiers, since the research indicates n log (n) rather than n as the rate of growth of the value of a network with n nodes. The extrapolation here is to replace the size of the network by the number of transactions in a given period of time. http://spectrum.ieee.org/computing/networks/metcalfes-law-is-wrong

Edit 1: There was an effective maximum blocksize in Bitcoin of around 256 KB until the spring of 2013.
Edit 2: A better distinction here is between those fee tiers at the lower end that are not challenging the penalty and those above that are challenging it, rather than between the two scenarios I indicated before.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on July 16, 2016, 03:53:14 PM
...
I'm confused, though I think I'm putting it together. I see you're threading the needle of making the fee adaptive while giving the fee the ability to serve its intended purpose of preventing blocksize expansion. I think you were on to something before with using the difficulty as an on-chain surrogate of external value. I think the need for that factor will exist at any stage of the chain's life - during initial distribution curve and during the tail emission. Your second scenario above seems focused on the tail emission portion of the coins existence, which doesn't happen for another ... however many years.




No the second scenario is determined by the median blocksize becoming larger than 60,000 bytes, not Monero reaching the tail emission. One can consider Bitcoin here as a special case where the penalty is infinite and with a fixed blocksize of 300,000 bytes. This was to a large degree the situation with Bitcoin until the spring of 2013.  

In either case the top three fee tiers would be determined by the blocksize penalty formula. There is really not much choice here if these fee tiers are going to challenge the blocksize penalty formula.  There is no reason, however, why the bottom two fee tiers could be not based on a difficulty ratio as was my original idea even after tail emission. The one proviso here would be to cap the lower two tiers at some percentages of the third tier. If the difficulty were to increase with time due to much more efficient hardware this could cause a spread in the fees over time that would make the Monero transactions when there was no pressure on the blocksize very affordable.  Spammers would still be blocked by the third and higher tiers.

Here is some research on Metcalfe's Law which to a large degree does support the case for not relying solely on the blocksize scaling / penalty formula for the lower two fee tiers, since the research indicates n log (n) rather than n as the rate of growth of the value of a network with n nodes. The extrapolation here is to replace the size of the network by the number of transactions in a given period of time. http://spectrum.ieee.org/computing/networks/metcalfes-law-is-wrong

Edit 1: There was an effective maximum blocksize in Bitcoin of around 256 KB until the spring of 2013.
Edit 2: A better distinction here is between those fee tiers at the lower end that are not challenging the penalty and those above that are challenging the penalty rather than between the two scenarios I indicated before.

I'm going to copy the formula you posted earlier to make sure I get it -

For simplicity I will define:
BlkSize = (1 + B) M_N
BaseReward = R_base
Penalty (for a given B) = P_B
NewReward (for a given B) = R_B

I'm assuming that B is the current block? Is it the size of the block in bytes? kb? I think it's the 1 + B that is throwing me off. Does that just mean "the next block"?

The penalty for a given B becomes:
P_B = R_base B^2
While the new reward for a given B becomes:
R_B = R_base (1 - B^2)
The first derivative of P_B with respect to B is:
dP_B / dB = 2 R_base B

I apologize that I can't parse the equations. It's even worse as I type now with the bbcode or whatever formatting it is.

Quote
Here is some research on Metcalfe's Law which to a large degree does support the case for not relying solely on the blocksize scaling / penalty formula for the lower two fee tiers, since the research indicates n log (n) rather than n as the rate of growth of the value of a network with n nodes. The extrapolation here is to replace the size of the network by the number of transactions in a given period of time. http://spectrum.ieee.org/computing/networks/metcalfes-law-is-wrong

So if I'm reading this right, you're using hashrate as an indicator of the size of the network? Or just use something other than blocksize scaling / penalty formula for the first two tiers?

I apologize for being obtuse in my understanding.

The way that I understand things is that we need a way to match the internal cost (xmr) with the external value of xmr for adding data to the blockchain. With the multi-tiered system you propose, something is used to adjust the xmr cost for the first two tiers, and then for tiers 3, 4, and 5 it uses a component of the block size penalty. I think what I'm not getting is that as the network becomes more valuable, the internal xmr cost has to go down to maintain the usability of the network. Now, if the change in the transaction fee is coupled to a transaction priority system and is dependent on the block penalty... wouldn't all those things imply that the transaction fee is increasing?


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on July 16, 2016, 06:28:27 PM
...

I'm going to copy the formula you posted earlier to make sure I get it -

For simplicity I will define:
BlkSize = (1 + B) M_N
BaseReward = R_base
Penalty (for a given B) = P_B
NewReward (for a given B) = R_B

I'm assuming that B is the current block? Is it the size of the block in bytes? kb? I think its the 1 + B that is throwing me off. Does that just mean "the next block"?

The penalty for a given B becomes:
P_B = R_base B^2
While the new reward for a given B becomes:
R_B = R_base (1 - B^2)
The first derivative of P_B with respect to B is:
dP_B / dB = 2 R_base B

I apologize that I can't parse the equations. Its even worse as I type now with the bbcode or whatever formatting it is.

Quote
Here is some research on Metcalfe's Law which to a large degree does support the case for not relying solely on the blocksize scaling / penalty formula for the lower two fee tiers, since the research indicates n log (n) rather than n as the rate of growth of the value of a network with n nodes. The extrapolation here is to replace the size of the network by the number of transactions in a given period of time. http://spectrum.ieee.org/computing/networks/metcalfes-law-is-wrong

So if I'm reading this right, you're using hashrate as an indicator of the size of the network? Or just use something other than blocksize scaling / penalty formula for the first two tiers?

I apologize for being obtuse in my understanding.

The way that I understand things is that we need a way to match the internal cost (xmr) with the external value of xmr for adding data to the blockchain. With the multi-tiered system you propose, something is used to adjust the xmr cost for the first two tiers, and then for tiers 3,4,5 , it uses a component of the block size penalty. I think what I'm not getting is that as the network becomes more valuable, the internal xmr cost has to go down to maintain the usability of the network. Now, if the change in the transaction fee is coupled to a transaction priority system and is dependent on the block penalty... wouldn't all those things imply that the transaction fee is increasing?

No apologies needed. You are not being obtuse, in fact, on the contrary, you are being very helpful. These are far from easy concepts to understand and explain. Furthermore I am not aware of any other crypto currency that has even seriously looked at these issues let alone proposed a solution or set of solutions, even though these are issues that every crypto currency faces.

You are of course correct that "we need a way to match the internal cost (xmr) with the external value of xmr for adding data to the blockchain".

Monero however imposes a significant linear correlation between the internal cost (xmr) and the external value of adding data to the blockchain by virtue of the penalty function for blocksize scaling. I say linear because, for a given base reward, the cost in xmr of adding a particular transaction of a given size in KB to a particular part of the penalty area falls linearly with the median of the blocksize. For example, if M_N is 10x larger the cost per transaction falls by a factor of 10, since there are 10x as many transactions paying for a given amount of penalty. In this example I am assuming the transactions are all of the same size, for simplicity.

B is the relative increase in the block size over the median blocksize. It can range from 0 (no increase) to 1 (100% increase / doubling of the blocksize). The critical point here is that a given B attracts the same penalty in XMR for an increase in blocksize from, say, 1 MB to 1.1 MB as for, say, 1 GB to 1.1 GB, since in both cases B = 0.1. In the latter case there are 1024x more transactions to absorb the cost of the penalty, so the cost per transaction falls by a factor of 1024. Again I am assuming for simplicity the same distribution of transactions by size in the 1.1 MB and 1.1 GB blocks.

Now here is where it can get interesting. If the natural relationship between network value and network size is say M_N log (M_N) rather than M_N, then it is possible for the cost of a transaction, in real terms, in the penalty area to rise with log (M_N), at least for a period of time. This can happen if, for example, the market responds to this difference by optimizing transactions in order to minimize paying the penalty. This would occur because all transactions do not have the same priority. It is for this reason that there can be very significant merit in using a different scaling formula (difficulty adjusted for block reward) for the low fee tiers than for the high fee tiers, where the fees are effectively set by the base reward and the median blocksize.
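The linearity claim above is easy to verify numerically. A sketch assuming equal-sized transactions, as in ArticMine's example; the 2 kB transaction size is an assumption.

```python
# Numeric check of the linearity claim: pushing a block the same
# relative amount B into the penalty zone costs the same total XMR
# regardless of M_N, so the cost *per transaction* falls linearly
# as the median grows. Transactions assumed equal-sized (2 kB).

def cost_per_tx(base_reward, B, median_bytes, tx_bytes=2_000):
    total_penalty = base_reward * B * B          # independent of M_N
    txs_in_block = (1 + B) * median_bytes / tx_bytes
    return total_penalty / txs_in_block

c_small = cost_per_tx(10.0, 0.1, 60_000)
c_big = cost_per_tx(10.0, 0.1, 600_000)          # 10x larger median
print(round(c_small / c_big, 6))                 # 10.0
```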


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smoothie on July 16, 2016, 07:51:09 PM
One of the ideas I had which I'm not sure how much demand there will be for is a selectable list of fees & mixing for a tx.

It would calculate the tx fee based on different mixins and you could select the "best" tx fee for the best mixin.

I've noticed this is all determined on the fly as outputs to mix with are selected in a "random" fashion using a triangular distribution (at least that's what I remember after reviewing the source).

But one thing I noticed is that the tx fee can actually be higher with a lower mixin than a higher mixin in some cases.

In terms of efficiency I can see people wanting to see a range of mixins and fees that go along with it when creating the tx and select the one they want to actually execute based on their determination of what is "efficient" for them.

This may not end up saving users much money on tx fees now but it could in the long term if they can see a range of mixins/fees to pick from.

Obviously this would make the most sense to incorporate into a GUI rather than trying to do it via the command line, as it would be that much more confusing to the average user.
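The selectable list could look something like the sketch below. The size constants are rough assumptions rather than Monero's real serialization; the point is only to illustrate how per-started-kB fee rounding can make a lower mixin cost the same as, or more than, a higher one, matching the observation above.

```python
# Sketch of a selectable fee/mixin list: estimate the transaction
# size for each ring size, derive the per-kB fee, and present the
# options. Size constants are rough assumptions.

PER_KB_FEE = 0.01            # XMR per kB (the then-current minimum)
BASE_TX_BYTES = 600          # non-input overhead (assumed)
BYTES_PER_RING_MEMBER = 70   # extra bytes per ring member per input (assumed)

def fee_for(ring_size, n_inputs=2):
    size = BASE_TX_BYTES + n_inputs * ring_size * BYTES_PER_RING_MEMBER
    kb = -(-size // 1024)    # ceil division: fees charged per started kB
    return kb * PER_KB_FEE, size

def options(ring_sizes=(3, 5, 10)):
    return [(r, *fee_for(r)) for r in ring_sizes]

for ring, fee, size in options():
    print(f"ring size {ring:3d}: ~{size:5d} B, fee {fee:.2f} XMR")
```

With these assumed constants, ring sizes 5 and 10 land in the same started-kB bucket and pay the same fee, so a user shown the list might rationally pick the larger ring.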


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on July 17, 2016, 12:35:53 AM
...

I'm going to copy the formula you posted earlier to make sure I get it -

For simplicity I will define:
BlkSize = (1 + B) M_N
BaseReward = R_base
Penalty (for a given B) = P_B
NewReward (for a given B) = R_B

I'm assuming that B is the current block? Is it the size of the block in bytes? kb? I think its the 1 + B that is throwing me off. Does that just mean "the next block"?

The penalty for a given B becomes:
P_B = R_base B^2
While the new reward for a given B becomes:
R_B = R_base (1 - B^2)
The first derivative of P_B with respect to B is:
dP_B / dB = 2 R_base B

I apologize that I can't parse the equations. Its even worse as I type now with the bbcode or whatever formatting it is.

Quote
Here is some research on Metcalfe's Law which to a large degree does support the case for not relying solely on the blocksize scaling / penalty formula for the lower two fee tiers, since the research indicates n log (n) rather than n as the rate of growth of the value of a network with n nodes. The extrapolation here is to replace the size of the network by the number of transactions in a given period of time. http://spectrum.ieee.org/computing/networks/metcalfes-law-is-wrong

So if I'm reading this right, you're using hashrate as an indicator of the size of the network? Or just use something other than blocksize scaling / penalty formula for the first two tiers?

I apologize for being obtuse in my understanding.

The way that I understand things is that we need a way to match the internal cost (xmr) with the external value of xmr for adding data to the blockchain. With the multi-tiered system you propose, something is used to adjust the xmr cost for the first two tiers, and then for tiers 3,4,5 , it uses a component of the block size penalty. I think what I'm not getting is that as the network becomes more valuable, the internal xmr cost has to go down to maintain the usability of the network. Now, if the change in the transaction fee is coupled to a transaction priority system and is dependent on the block penalty... wouldn't all those things imply that the transaction fee is increasing?

No apologies needed. You are not being obtuse, in fact, on the contrary, you are being very helpful. These are far from easy concepts to understand and explain. Furthermore I am not aware of any other crypto currency that has even seriously looked at these issues let alone proposed a solution or set of solutions, even though these are issues that every crypto currency faces.

You are of course correct. that "we need a way to match the internal cost (xmr) with the external value of xmr for adding data to the blockchain".

Monero however imposes a significant linear correlation between the internal cost (xmr) and the external value of adding data to the blockchain by virtue of the penalty function for blocksize scaling. I say linear because, for a given base reward, the cost in xmr of adding a particular transaction of a given size in KB to a particular part of the penalty area falls linearly with the median of the blocksize. For example, if M_N is 10x larger the cost per transaction falls by a factor of 10, since there are 10x as many transactions paying for a given amount of penalty. In this example I am assuming the transactions are all of the same size, for simplicity.

B is the relative increase in the block size over the median blocksize. It can range from 0 (no increase) to 1 (100% increase / doubling of the blocksize). The critical point here is that a given B attracts the same penalty in XMR for an increase in blocksize from, say, 1 MB to 1.1 MB as for, say, 1 GB to 1.1 GB, since in both cases B = 0.1. In the latter case there are 1024x more transactions to absorb the cost of the penalty, so the cost per transaction falls by a factor of 1024. Again I am assuming for simplicity the same distribution of transactions by size in the 1.1 MB and 1.1 GB blocks.

Now here is where it can get interesting. If the natural relationship between network value and network size is say M_N log (M_N) rather than M_N, then it is possible for the cost of a transaction, in real terms, in the penalty area to rise with log (M_N), at least for a period of time. This can happen if, for example, the market responds to this difference by optimizing transactions in order to minimize paying the penalty. This would occur because all transactions do not have the same priority. It is for this reason that there can be very significant merit in using a different scaling formula (difficulty adjusted for block reward) for the low fee tiers than for the high fee tiers, where the fees are effectively set by the base reward and the median blocksize.

Okay. So in the equations stated above, I think you're just laying out the existing penalty fee as it exists in Monero.

From an earlier post, you state

Quote
In this scenario per KB fees are proportional to the base reward divided by the median of the blocksize over the last N blocks, Rbase/MN.

I think this is the only place I see a clear exposition of how per KB fees will be calculated, and it is directly tied to MN, so the blocksize penalty mechanics are essentially encapsulated by MN. The base reward will eventually hit 0.6 (at the current 2 minute blocktime), so at that point the median blocksize is the main driver.

Quote
I say linear because for a given base reward the cost in XMR of adding a particular transaction of a given size in KB to a particular part of the penalty area falls linearly as the median of the blocksize grows. For example if MN is 10x larger the cost per transaction falls by a factor of 10 since there are 10x as many transactions paying for a given amount of penalty. In this example I am assuming the transactions are all of the same size for simplicity.
With this I can see where your head is at - basically we assume that if MN has grown, the network has become more valuable due to the increase in activity. The increased external valuation has to be countered by an internal decrease in the XMR cost of adding to the chain, though it must be done with the proper safeguards to prevent a bloat attack. In your approach, this is done by directly scaling the blockchain fee with the penalty, which is directly coupled to MN.

So, in practice (for humans using the network), we will have a statistic that details MN in real time, and the software client will make suggestions based on MN for per KB fees.
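A minimal sketch of what such a client-side suggestion could look like, assuming per KB fees scale as Rbase/MN per the quote above; the proportionality constant k is purely illustrative, not something the thread fixes:

```python
def suggested_fee_per_kb(base_reward, median_kb, k=0.25):
    """Per-kB fee proportional to R_base / M_N, per the proposal upthread.

    k is an illustrative constant only; the thread does not specify one.
    As the median blocksize grows (more activity, presumably a more
    valuable network), the suggested per-kB fee falls proportionally.
    """
    return k * base_reward / median_kb
```

So doubling the median halves the suggested per-kB fee, which is exactly the linear relationship being described.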

Meanwhile, the consensus protocol creating a given block will have calculated MN for a given span and determined the optimal size of the block to create. If that optimal size is <= the previous block's, any transaction meeting the current blockchain-fee threshold can be included in the block. If it is greater, transactions with lower fees can be included in addition to those meeting the threshold.

My brain hurts just imagining how the optimal block size calculations will look.

I wonder if we should pull off the training wheels - the 60 kb thing - and just let the protocol do its thing, so we could see how the blocksize penalty actually affects things. Though that would force miners to not mine any transactions until the mempool is stuffed. Hrm...


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on July 17, 2016, 02:26:20 AM
...


Setting the fees in the penalty area is actually the easy part.

It is coming up with an optimal, or at least close to optimal, algorithm for determining which transactions to include in a block, from the miner's point of view, that I am still wrapping my head around. I do have some ideas at this point but nothing concrete yet. By the way, we have already had blocks that triggered the penalty, going all the way back to 2014. Furthermore, there was an attack in 2014 that produced a fair number of blocks in the penalty area, at a time when M0 was 20,000 K. This was before the recent fork to 2 min blocks. So in effect we already have experience with the training wheels taken off. While the penalty works as designed, the economics are very far from optimal, with only one fixed fee tier that sits roughly at the low end of the penalty range and basically little if any optimization on the miner side.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on July 17, 2016, 02:51:34 AM
...


Setting the fees in the penalty area is actually the easy part.

It is coming up with an optimal, or at least close to optimal, algorithm for determining which transactions to include in a block, from the miner's point of view, that I am still wrapping my head around. I do have some ideas at this point but nothing concrete yet. By the way, we have already had blocks that triggered the penalty, going all the way back to 2014. Furthermore, there was an attack in 2014 that produced a fair number of blocks in the penalty area, at a time when M0 was 20,000 K. This was before the recent fork to 2 min blocks. So in effect we already have experience with the training wheels taken off. While the penalty works as designed, the economics are very far from optimal, with only one fixed fee tier that sits roughly at the low end of the penalty range and basically little if any optimization on the miner side.

Re: bolded section - I would argue that we shouldn't invest too much in optimizing this. This is an algorithm the market will figure out. Pool ops will claim they have a better block maker than other pool ops, etc. etc.

Indeed I know we've seen blocks trigger the penalty, but we haven't seen a movement in the median.

Ultimately I think I like what you're putting down, but for it to have any effect on things now we'd either have to remove the 60 kb pseudo-window thing or come up with something different for the current era of pure speculation without significant financial activity on the chain.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on July 17, 2016, 03:17:07 AM
Indeed I know we've seen blocks trigger the penalty, but we haven't seen a movement in the median.

We have on multiple occasions. Not with the sort of smarter algorithms that ArticMine discusses though.

I'm not sure I agree we can ignore miner optimization. Such behavior may affect the overall incentives. Transaction creators and miners are in a sort of conversation, with fees as the language. You probably can't understand one side of this conversation, or the conclusion of it, without considering the other.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on July 17, 2016, 03:51:11 AM
Indeed I know we've seen blocks trigger the penalty, but we haven't seen a movement in the median.

We have on multiple occasions. Not with the sort of smarter algorithms that ArticMine discusses though.

I'm not sure I agree we can ignore miner optimization. Such behavior may affect the overall incentives. Transaction creators and miners are in a sort of conversation, with fees as the language. You probably can't understand one side of this conversation, or the conclusion of it, without considering the other.


miner optimization as in miners creating better algos for including transactions in blocks to reap the most reward?

So this gives us some choices - we either design the most optimal block inclusion algo based on the parameters set in place by the auto-fee adjuster (can we give this thing a name?), or we make the auto-fee adjuster parameters such that gaming them doesn't totally bork the system.

edited to add - so it's engineering perfection vs. fault tolerance.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: smooth on July 17, 2016, 05:52:40 AM

miner optimization as in miners creating better algos for including transactions in blocks to reap the most reward?

So this gives us some choices - we either design the most optimal block inclusion algo based on the parameters set in place by the auto-fee adjuster (can we give this thing a name?), or we make the auto-fee adjuster parameters such that gaming them doesn't totally bork the system.

In either case, you still have to consider what strategies might be used.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on July 18, 2016, 02:27:28 PM
I dunno why this is in my head. I think because I recently saw xthin blocks mentioned somewhere.

But I'm curious whether another level of thinness, and speed, could be achieved in a blockchain consensus network.

As with the thin-block approaches, an index of transactions is used - a hash of each transaction. However, instead of a daemon sending out this index and then forcing a downstream daemon to request the transactions to flesh out the block, why not just mine in the hashes? Consensus can be reached in a fraction of the time, and the actual transaction data can be filled in later. There's a block window during which Monero transactions have to mature anyway, so we have that window to fill in the data.

node 1 - finds a block, sends out a meta-block consisting of the block header and transaction hashes.
node 2 - accepts the block, starts solving the next block. While solving, the rest of the block data is filled in and validated. If the filled-in data is not valid, the block is rejected and node 2 waits to receive another block candidate, or reverts to finding its own.

Obvious attack vectors are to spam with hashes that don't have corresponding transactions. We would also need some way to prevent data from being filled in when it shouldn't be. E.g., the meta-block contains a hash of something that was in no one's mempool at the time the block was formed (a malicious miner just put it in there). The meta-block gets added, nodes then request data to fill that index, and the malicious miner then transmits the matching transaction...

I guess my point is that a certain network buffer exists due to the possibility of re-organizations, and we can use this buffer to increase the speed of consensus independent of the transfer of data.

hashes are so cool.

It seems that the only thing, block-protocol-wise, that would need to be changed is how the block ID is hashed. Instead of the transactions themselves, the Merkle root is just computed from the transaction hashes.

5. Calculation of Block Identifier

   The identifier of a block is the result of hashing the following data
   with Keccak:

      - size of [block_header, Merkle root hash, and the number of
        transactions] in bytes (varint)

      - block_header,

      - Merkle root hash,

      - number of transactions (varint).

   The goal of the Merkle root hash is to "attach" the transactions
   referred to in the list to the block header: once the Merkle root
   hash is fixed, the transactions cannot be modified.


5.1  Merkle Root Hash Calculation

   Merkle root hash is computed from the list of transactions as
   follows: let tx[i] be the i-th transaction in the block, where 0 <= i
   <= n-1 (n is the number of transactions) and tx[0] is the base
   transaction. Let m be the largest power of two, less than or equal to
   n. Define the array h as follows:

      h[i] = H(h[2*i] || h[2*i+1])
        where 1 <= i <= m-1 or 3*m-n <= i <= 2*m-1.

      h[i] = H(tx[i-m])
        where m <= i <= 3*m-n-1.

      h[i] = H(tx[i-4*m+n])
        where 6*m-2*n <= i <= 4*m-1.

from: https://cryptonote.org/cns/cns003.txt
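As a sanity check on the quoted construction, here is a rough Python sketch of the tree hash it describes, operating on transaction hashes (the H(tx) leaves). Note the hash is a stand-in: CryptoNote uses legacy Keccak-256, whose padding differs from hashlib's SHA3-256, so real block IDs would not match; only the tree shape is illustrated.

```python
import hashlib

def H(data: bytes) -> bytes:
    # Stand-in for CryptoNote's Keccak-256 (SHA3-256 has different padding;
    # the tree structure, not the exact digests, is the point here).
    return hashlib.sha3_256(data).digest()

def tree_hash(tx_hashes):
    """Merkle root per the quoted spec: n leaves, m = largest power of 2 <= n.

    The first 2m - n transaction hashes pass through as leaves; the
    remaining 2(n - m) are paired and hashed, leaving exactly m leaves
    for a standard binary reduction.
    """
    n = len(tx_hashes)
    assert n > 0
    if n == 1:
        return tx_hashes[0]
    m = 1
    while m * 2 <= n:
        m *= 2
    row = list(tx_hashes[: 2 * m - n])          # pass-through leaves
    for i in range(2 * m - n, n, 2):            # paired leaves
        row.append(H(tx_hashes[i] + tx_hashes[i + 1]))
    while len(row) > 1:                          # reduce the binary tree
        row = [H(row[i] + row[i + 1]) for i in range(0, len(row), 2)]
    return row[0]
```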


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on July 21, 2016, 02:08:47 PM
i can't help it, i drink coffee.

Peerfriends Metanode Clusters-

what: self-organizing node clusters that share blockchain and transaction data (somehow).

how - the daemon monitors peers and finds ones that send good data and have good connectivity.

The clusters will self-organize. Individual nodes maintain block-header chains and request data from peerfriends if needed. Small cluster sizes (relative to the network, perhaps) will ensure that blockchain data doesn't "go missing". Peerfriends share information about their peer connections, so there is no overlap. In this way, each member of the peerfriend cluster is less likely to receive duplicate information, because each receives information from a different part of the network. This lightens the load on individual nodes, because it reduces the number of transactions coming in per node. It may also reduce the cost of relaying larger blocks.

https://sketch.io/render/sk-20f9bce148a597ef2356ffed28c0c962.jpeg

btw, this website is awesome: https://sketch.io/sketchpad/


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on July 25, 2016, 11:56:51 PM

For simplicity I will define:
BlkSize = (1 + B) * M_N
BaseReward = R_base
Penalty (for a given B) = P_B
NewReward (for a given B) = R_B

Where M_N = the median size of the last N blocks.

The penalty for a given B becomes:
P_B = R_base * B^2
While the new reward for a given B becomes:
R_B = R_base * (1 - B^2)
The first derivative of P_B with respect to B is:
dP_B / dB = 2 * R_base * B

Where B is the relative increase in the block size over the median blocksize.

-------

@arcticmine, would you say all the above equations reflect your current state of thinking on the matter? If so, I might start toying around with some simulations/scenarios for determining optimum block inclusion algos. I might stick with bash, but it could end up in R. Either way, it'll be on github. I just want to be sure I have the right formulas before I dive into it. I figure I'll use a toy data set first, and then try it against a real data set (I think one of us linked to a litecoin dataset somewhere, and bitcoin's should be obtainable).


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on July 26, 2016, 01:54:00 AM
Yes. The above is my current understanding.

This is actually a discrete optimization problem.

For a small number of transactions one can test all the possibilities to find the optimum; however this may become computationally expensive to the miner if the number of transactions to include becomes very large.

For a very large number of transactions, with each individual transaction size very small when compared to the blocksize, a simple solution would be to add transactions in order of per KB fees, while testing each addition for an increase in revenue (total fees - penalty). When adding transactions causes a decrease in revenue, stop adding transactions.

It may be simplest to choose between one of the two cases above depending on the number of transactions and their individual size relative to the block size.
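The greedy case described above could be sketched roughly as follows. This is only an illustration under stated assumptions: the `fee`/`size` fields and float units are hypothetical, the penalty is the quadratic P_B = R_base * B^2 from upthread, and blocks more than double the median (B > 1) are treated as invalid.

```python
def penalty(block_size, median, base_reward):
    """Quadratic penalty P_B = R_base * B^2, B = size/median - 1 clamped at 0.
    Returns None when B > 1, i.e. the block would be invalid."""
    B = max(0.0, block_size / median - 1.0)
    if B > 1.0:
        return None
    return base_reward * B * B

def greedy_fill(txs, median, base_reward):
    """Greedy miner heuristic from the post: sort by fee per KB, then keep
    adding while revenue (total fees - penalty) does not decrease."""
    txs = sorted(txs, key=lambda t: t["fee"] / t["size"], reverse=True)
    size, fees, revenue, chosen = 0.0, 0.0, 0.0, []
    for t in txs:
        new_size = size + t["size"]
        p = penalty(new_size, median, base_reward)
        if p is None:
            continue  # would exceed the 2x-median cap; skip this tx
        new_rev = fees + t["fee"] - p
        if new_rev < revenue:
            break     # adding more only loses money; stop
        size, fees, revenue = new_size, fees + t["fee"], new_rev
        chosen.append(t)
    return chosen, revenue
```

One caveat the thread itself raises: this greedy rule is only near-optimal when individual transactions are small relative to the block, since a single large transaction can straddle the profitable/unprofitable boundary.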


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: GingerAle on August 03, 2016, 04:09:13 PM
arcticmine, sorry I haven't followed up on the adaptive blocksize algo. Par for the course with my manic tendencies. Ah well.

I've been following the bitfinex hack (as I'm sure we all have), and came across the Vault Address idea: http://fc16.ifca.ai/bitcoin/papers/MES16.pdf

I find it interesting and think it would be something that would add a lot of value to Monero. Based on my limited knowledge of both Monero and this "covenant" concept, it seems like Monero could pull it off, but I'm not sure. The more basic text on the concept comes from here: http://hackingdistributed.com/2016/02/26/how-to-implement-secure-bitcoin-vaults/


Quote
Operationally, the idea is simple. You send your money to a vault address that you yourself create. Every vault address has a vault key and a recovery key. When spending money from the vault address with the corresponding vault key, you must wait for a predefined amount of time (called the unvaulting period) that you established at the time you created the vault -- say, 24 hours. When all goes well, your vault funds are unlocked after the unvaulting period and you can move them to a standard address and subsequently spend them in the usual way. Now, in case Harry the Hacker gets a hold of your vault key, you have 24 hours to revert any transaction issued by Harry, using the recovery key. His theft, essentially, gets undone, and the funds are diverted unilaterally to their rightful owner. It’s like an “undo” facility that the modern banking world relies on, but for Bitcoin.

Now, the astute reader will ask what happens when Harry is really really good, and he lies in wait to steal not just your vault key, but also your recovery key. That is, he has thoroughly pwnd you and, as far as the network is concerned, is indistinguishable from you. Vaults protect you even in this case. The recovery keys have a similar lock period, allowing you to perpetually revert every transaction Harry makes. Unfortunately, at this point, Harry can do the same and revert every transaction you make. To avoid a perpetual standoff, the recovery keys can also burn the funds, so no one gets the money. The upshot is that Harry is not going to be able to collect a dime of proceeds from his theft. And this, in turn, means that Harry is unlikely to target vaults in the first place, because there is no positive outcome where he gets to keep the proceeds.

The guts of the paper go into detail about the bitcoin scripting that is necessary... and because Monero uses less scripting, it might not work? I dunno.

The important thing, though, is that this type of blockchain technology improvement requires a hardfork, which we're all fine with in Monero.



Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: ArticMine on August 03, 2016, 11:14:03 PM


My first comment is that reversibility is not a substitute for poor security. This applies equally to "banks" such as exchanges and to individuals. The reality, when it comes to security, is that crypto currency by design turns the clock back over 50 years, if not 100 years. So the security mindset of the 1960's or earlier is actually appropriate here - even better, the security mindset of the 1890's. If all the bank's gold was stolen, the bank went bust and the depositors lost all of their deposits. End of story. Monero, after all, is by design a form of digital gold.

Many people today would be shocked to find out that, as recently as 50 years ago, most fiat transactions were not reversible. Cash and bearer instruments were the norm. Those transactions that were reversible were only reversible for a very limited period of time. Cheques, for example, could only be reversed while they were in transit; once the post office delivered the cheque and it was cashed, it was no longer reversible. Reversibility of transactions on a large scale started with the use of credit cards for card-not-present transactions, first by telephone, fax and mail, and then over the Internet. When a credit card is used over the Internet or the phone, for example, there is no way to legally bind the payer to the transaction. Even with fax or mail there is no way for the merchant to verify that the signature provided is valid, since they do not see the card. This callous lack of security in the banking system has also been extended to many other transactions. For example, in Canada, if one deposits a cheque at an ATM, the banks do not verify that the account to which the cheque was deposited is actually that of the named payee on the cheque.

If the above is not enough, this is all compounded by the design of the most popular proprietary operating systems. Security of the end user, especially for "consumer" products, is not the primary design goal. The primary design goals are instead the deterrence of copyright infringement (DRM) and the collection of information from the end user for commercial exploitation. The Microsoft Windows registry was designed primarily to make it very difficult to copy an installed application from one computer to another, while the primary design goal of the security in iOS is to prevent the copying of installed apps from one phone to another. The result is that most people, with good reason, do not trust their computers, and instead in many cases keep their crypto currencies on centralized exchanges. This to a large degree defeats the whole point of Bitcoin, and to a much larger degree of Monero.

For individuals, basic security starts with using a FLOSS OS such as GNU/Linux or FreeBSD. One has, of course, to practice sound computer security and "harden" the system. There are already hardened distributions such as Qubes OS (https://www.qubes-os.org/), but even a mainstream GNU/Linux distribution such as Ubuntu, combined with some simple common-sense security practices, is orders of magnitude ahead of what Microsoft and Apple sell to consumers.

For "banks" I would start with finding a retired banker over the age of 80, preferably over the age of 90, and getting some training on the security practices that were in place in the banking system say 50 - 60 years ago. I mean, let's start here with some of the basics, such as multiple signatures for amounts over a certain threshold - say 10,000 USD rather than 50,000,000 USD. I mean, a hot wallet with over 100,000 XBT in it? Seriously. The regulators also have a role to play here in setting standards for security and internal controls.

After my lengthy security rant, I will address the articles. The concept here is to create a time-locked transaction that can be reversed by the sender for a fixed period of time. There is some merit to this. It would require multisignature transactions that are time-locked. My instinct is that this could be done in Bitcoin with what the Lightning Network is proposing. Multisignatures will be possible in Monero after RingCT is implemented. I am not very clear on how time-locked transactions could be implemented in Monero, so someone more expert on this than myself may be able to comment. In the meantime, Monero "banks" will have to focus on preventing the "gold" from being stolen in the first place, just as in the 19th century.


Title: Re: [XMR] Monero Improvement Technical Discussion
Post by: aminorex on September 02, 2016, 05:21:21 PM
Post-quantum motivation: https://www.technologyreview.com/s/602283/googles-quantum-dream-may-be-just-around-the-corner/

TL;DR: End of 2017, a 50-qubit machine is possible, perhaps even likely.